AFAIK, atproto is primarily designed to support multiple distinct clients over shared data, but I also wonder if it could help with composing more granular views within a client. I previously worked on a browser extension for Twitter, where data scraping was a major challenge - a challenge that seems much easier to solve when building on an open protocol like atproto.
Sorry we didn’t mention it — it is on our radar, but we ran out of space and had to omit lots of good prior art.
I should also mention, btw, that Bluesky's user-configurable feeds are a perfect example of a gentle slope from user to creator!
Already with AI we are seeing a huge uptick in people's expectations that agents operate across apps. The app is losing its monopoly on power, losing its primacy as the thing the user touches. See "How Alexa Dropped the Ball on Being the Top Conversational System on the Planet", an article about a lot of factors, many of them more Conway's Law & corporate-fiefdom oriented, but which touches repeatedly on the need to thread experiences across applications and across domains, where the historical "there's an app for that" paradigm is giving way, is insufficient. https://www.mihaileric.com/posts/how-alexa-dropped-the-ball-... https://news.ycombinator.com/item?id=40659281
AI again is an interesting change agent in other ways. As well as scripting & MCP'ing existing tools/apps, the ability to rapidly craft experiences is changing so quickly. "Home-Cooked Software and Barefoot Developers" speaks so directly to how this could enable vastly more people to craft their own experiences. I expect that over time frameworks/libraries will themselves adapt, that the matter of computing shifts from targeting expert developer communities who use extensive community knowledge to do their craft, to forms that are deliberately de-esotericized, crafted in even more explicitly compositional manners that are more directly malleable, because AI will be better at building systems with more overt declarative pieces. https://maggieappleton.com/home-cooked-software/ https://news.ycombinator.com/item?id=40633029
Right now the change is symbolic more than practical, but I also loved seeing Apple's new Liquid Glass design system yesterday, in part because it so clearly advances what Material set out to do: constructing software from multiple distinct layers, with the content itself being the primary app surface. And in Liquid Glass's case extending that app surface even further, making it practically full screen always, with tools and UI merely refractive layers above the content. This de-emphasizes the compute and makes the content the main thing, by removing the boxes and frames of encirclement that once defined the app's space, giving way to pure content, with the buttons mere layers floating above, portals of function floating above the content below. In practice it's not substantially different from what came before, yet, but it feels like the tools are more incidental, a happenstance layer of options above the content, and it's suggestive to me that the tools could change or swap. https://www.apple.com/newsroom/2025/06/apple-introduces-a-de... https://news.ycombinator.com/item?id=44226612
There's such a long arc here. And there are so many reasons why companies love and enjoy having total power over their domain, why they want to be the sole arbiter of experience, with no one else having any say. We've seen collapses of interesting, bold, intertwingular-era, API-hype hopeful projects, like Spotify desktop shutting down the amazing, incredible JavaScript Apps SDK so long ago (2011-2014). https://techcrunch.com/2014/11/13/rip-spotify-apps-rip-sound...
Folks love to say that this is what the market wants, that there is convenience and freedom in not having any choices, in not having to compose tools, in everything being provided whole and unchanging. I'd love to test that thesis, but I don't think we have the evidence now: 99.999%+ of software is built in the totalistic form, tablets carved and passed down to mankind for us to use as directed (or risk anti-circumvention felony charges!). We haven't really been running the experiments to see what would be good for the world, what would make us a better, happier, more successful world. Who's going to foot the bill, who's going to abandon control over their users?
And it's not something you can do alone. The real malleable software revolution requires more than individual changes, more than individual apps adding plugins or scripting. The real malleable software shift comes when the whole experience is built to be malleable. It calls for general systems research: operating systems that host not just applications, but views and tools and data flow, history, event sourcing and (perhaps) transactions. No one piece of software can ever adequately be malleable software on its own: real malleable software requires malleable paradigms of computing, upon which experiences, objects and tools compose.
It all sounds so far off and far-fetched. But where we are now is a computing trap, one premised on a philosophy of singularness and unconnectedness delivered down to us users/consumers (a power relationship few want to change!). The limitations of the desktop application model, as it's been carried over and morphed into mobile apps and watch apps, feel like an ever more cumbersome limit, a gate on what is possible. I feel the duality strongly: I'm with those pessimists saying the malleable software world is impossible, that we can never make the shift, I cannot see how it ever could come to be; and yet I don't think we can stay here forever, I think the limitations are too great, and the opportunity for a better, more open computing to awaken is too interesting and too powerful for that possibility to lie slumbering forever. I want to believe the future is exciting, in good ways, in re-opening ways, and although I can hardly see who would fund better or why, and although the challenge is enormous, the project of rebuilding mankind's agency within the technological society feels obligatory, necessary & inevitable, and my soul soars at the prospect. Malleable software: thus we all voyage towards computing.
I've spent countless hours thinking about how to build a business that would solve some class of problems my friends have encountered and I've almost always had to conclude that the business would probably not be profitable, so their ideas were never tested.
Now, with a 2025 chatbot, I can confidently estimate the feasibility of a basic project in minutes and we can build the thing together in hours. No one needs to make a profit, build a new business, or commit to ongoing maintenance. Locally crafted software is taking off dramatically and I think it will become the new normal.
Coauthor here -- did you catch our section on AI? [1]
We emphatically agree with you that AI is already enabling new kinds of local software crafting. That's one reason we are excited about doing this work now!
At the same time, AI code generation doesn't solve the structural problems -- our whole software world was built assuming people can't code! We think things will really take off once we reorient the OS around personal tools, not prefabricated apps. That's what the rest of the essay is about.
[1] https://www.inkandswitch.com/essay/malleable-software/#ai-as...
My belief is that it is happening already: local software crafting is happening now, before the tools are ready. People aren't going to wait for good APIs to exist; people will MacGyver things together. They'll scrape screens (sometimes with OCR), run emulated devices in the cloud, and call APIs incorrectly and abusively until they get what they need. They won't ask for permission.
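One hedged illustration of that MacGyver approach (assuming Pillow and pytesseract are installed, with a local Tesseract binary available): grab whatever is on screen and OCR it, no API required.

```python
# A sketch of screen scraping with OCR, not tied to any particular app's API.
from PIL import ImageGrab      # Pillow's cross-platform screen capture
import pytesseract             # thin wrapper around the Tesseract OCR engine

screenshot = ImageGrab.grab()                   # capture the current screen
text = pytesseract.image_to_string(screenshot)  # "scrape" it into plain text
print(text)                                     # ready to be piped somewhere else
```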
A lot of software developers may transition from building to cleaning up knots.
Yes, absolutely, even trivial things like colors can rarely be changed, let alone more involved UI parts.
> Inflexible electronic medical records systems are driving doctors to burnout.
> When different users have different needs, a centralized development team can’t possibly address everyone’s problems.
That's not the main issue, which is that they don't address *anyone's* problems well, since actual users have very little power here and the devs are far removed from the actual user experience. Like that example of filling in useless fields - that serves no one!
> when a developer does try to cram too many solutions into a single product, the result is a bloated mess.
Unless it's organized well? There is no inherent reason why many solutions equals a mess or even bloat (e.g., if solutions are modules you can ignore or not even install, an app with only the solutions you care about has no bloat).
But in general, very laudable goals, would be very empowering for many users to live in a dream world where software is built based on such principles...
One related tool, previously discussed at https://news.ycombinator.com/item?id=44118159 but which didn't seem to get much traction, is:
https://pontus.granstrom.me/scrappy/
but it pretty much only works for JavaScript programmers and their friends (or folks interested in learning JavaScript).
Other tools which I'd like to put forward as meriting discussion in this context include:
- LyX --- making new layout files allows a user to create a customized tool for pretty much any sort of document they might wish to work on --- a front-end for LaTeX
- pyspread --- every cell is either a Python program or the output of one, and the possibility of cells being images lets one do pretty much anything without the overhead of making or reading a file (a minimal sketch of the cell-as-code idea follows this list)
- Ipe https://ipe.otfried.org/ --- an extensible drawing program, though it really needs a simpler extension mechanism; I'd love to see a tool in the vector drawing space which addressed that --- perhaps the nascent https://graphite.rs/ ?
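To make the pyspread item above concrete, here is a minimal sketch of the cell-as-code idea. This is not pyspread's actual API, just the underlying concept of a grid where every cell is a Python expression that can reference other cells.

```python
# Toy "spreadsheet": each cell holds a Python expression, evaluated on demand.
cells = {
    "A1": "10",
    "A2": "32",
    "A3": "cell('A1') + cell('A2')",   # formulas are ordinary Python referencing other cells
}

def cell(name):
    # Evaluate the named cell; the expression can itself call cell() recursively.
    return eval(cells[name], {"cell": cell})

print(cell("A3"))   # 42
```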
The Qt/KDE world has (imho) some of the best quality software I've used, and is, astonishingly, relatively unpopular compared to FOSS competitors.
Ipe now has a web interface (through the magic of Qt) and I remember there was a plan to make one for LyX, though if it ever happened, I couldn't find it.
In the section on dynamic documents towards the end of our essay, we show several of our lab’s own takes on this category of tool, including an example of integrating AI as an optional layer over a live programmable document.
I need an interactive tool for programming such (or I need to buckle down and implement the METAFONT algorithm in my current project).
And Microsoft has had OLE -- which is sort of analogous to the object-model portion of AppleScript -- for ages.
1. Use of FOSS will be helpful, since it can be improved if something is wrong with it.
2. UI controls are objects with data models like any others, so even if an API is not provided by a program, these UI controls, and the data associated with them, can be added into scripts like any other API can be.
3. Capabilities are needed for I/O, and proxy capabilities can be created and used. Even if the program does not expect the I/O to be filtered or modified, the system ensures that it can be done anyway (and the command shell in the system is designed to allow this, too).
4. This metadata is required even for a program to start (due to the way the I/O works).
FileMaker/Microsoft Access/HyperCard (no longer exists)
Macromedia Flash (no longer exists)
Spreadsheets (like Microsoft Excel, unfortunately Airtable isn't there yet?)
Wix (maybe? surely there are better alternatives)
Zapier (or an open source version)
Then move on to what programming could/should be: htmx
Firebase/RethinkDB (no longer maintained?)
Erlang/Go
GNU Octave/MATLAB
Lisp/Scheme/PostScript/Clojure
Only then, after having full exposure to what computers are capable of and how fast they really are, should students begin studying the antipatterns that have come to dominate tech: React
Ruby on Rails
Javascript (warts of the modern version with classes and async/await, not the original)
C#/Java/C++/Rust (the dangers of references/pointers and imperative programming)
iOS/Android (Swift vs Objective-C, Kotlin vs Java, ill-conceived APIs, etc)
I realize this last list is contentious, but I could go into the downsides of each paradigm at length. I'm choosing not to. Since we can't fix the market domination of multibillion-dollar companies who don't care about this stuff on any reasonable timescale, maybe we can pull the wool off the children's eyes and give them the tools to tear down the status quo.
I suspect that AI and geopolitical forces may take this decision away from us though. It may already be too late. In that case, we could start with spiritual teachings around philosophy, metaphysics and wisdom to give them the tools needed to work with nonobjective and nondeterministic tech that's indistinguishable from magic.
But if the class is computer science at a university, then the students want to go deeper and learn how to improve upon and compete with the existing tools. They need the theory first, which means Lisp (or a derivative) and an imperative language.
In my freshman year of college, I thought I was hot stuff because I knew C++, so I tried to place out of some of the 100 level classes. But the test seemed strange to me, focusing more on abstractions than syntax. I don't remember if I failed, but I don't think I placed out of anything. One of my first classes was on Lisp, specifically Scheme, and it completely blew my mind and forever changed how I look at programming.
Just before I graduated in 1999, they started transitioning to Java, because the web was so popular. But most of us thought that was a mistake. I don't know if they ever switched back to Lisp.
On a funny note, I took that whole class without realizing that Lisp statements could be broken up into separate lines. Or more accurately that each line just declares equivalences that get reduced down to their simplest form by the runtime. So I wrote all of the homework assignments as one giant function of nested parentheses, even for some of the more complex tasks on sorting primitives like lists and trees. I picture the graders shaking their heads in a mix of frustration and awe hahaha.
Here's my premise - if you use something like a game engine, say Unity or Unreal, you basically have the ability to modify everything in real time and have it reflected inside the editor immediately - you can change textures, models, audio, even shaders (which are a kind of code), and have the editor reload just that tiny resource instantaneously.
But not the code itself - for some reason computer code must go through a compilation, optimization and linking process, creating a monolithic executable piece of code that cannot be directly modified. This is even true of dynamic languages like JS/TS, which support modification at the fundamental level, yet somehow lose this ability when using advanced toolchains.
Which is weird, since most compilers/OSes support this dynamism at a fundamental level - the machine interface unit of the C compiler is a function, and the replacement unit in most OSes is a dynamic library, a collection of said functions - yet changing these at runtime is almost unheard of and most of the time suicidal.
This is because of a couple of problems. Memory allocation: replacing parts of a program at runtime can lead to leaks if we don't clean up after the old code. Resource allocation: this can be solved by tying resource lifetimes either to outside factors or to the lifetime of the function or its containing unit.
A demonstrated analog of this is OS processes, which can be terminated abruptly, their binaries replaced without fear of resource leakage.
The final problem of data corruption can be solved by making such program parts stateless, and making them use a store with atomic transactions.
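A minimal Python sketch of that last idea: stateless components writing through a store with atomic, roll-back-able transactions. The Store class here is illustrative only, not any particular database.

```python
import copy

class Store:
    """Toy object store: changes only become visible if the whole transaction succeeds."""
    def __init__(self):
        self.state = {}
        self.history = []            # committed snapshots, enabling rollback / time travel

    def transaction(self):
        return _Transaction(self)

class _Transaction:
    def __init__(self, store):
        self.store = store

    def __enter__(self):
        self.snapshot = copy.deepcopy(self.store.state)
        return self.store.state

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.store.history.append(self.snapshot)   # commit; keep the prior version around
        else:
            self.store.state.clear()
            self.store.state.update(self.snapshot)     # any error rolls the store back
        return False                                   # don't swallow the exception

store = Store()
with store.transaction() as state:
    state["count"] = 1

try:
    with store.transaction() as state:
        state["count"] = 2
        raise RuntimeError("component crashed mid-update")
except RuntimeError:
    pass

assert store.state["count"] == 1   # the failed update never became visible
```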
I have a pretty good idea of how to build such an environment at a low level. Its core idea is process-like isolation barriers around small pieces of a program, and an object-database-like datastore that can never be corrupted thanks to transactional changes (which can be rolled back, enabling things like time-travel debugging). These processes could communicate either via messages/events or by sharing parts of their memory.
Such a system would allow you to fearlessly change any part of the source code of a running application at runtime - even if you mess up the code of a component, say even to the point that it doesn't compile, all that would happen is that single component would cease to function without affecting the rest of the app.
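To illustrate the failure-containment part, here is a minimal Python sketch (purely illustrative, not the environment described above): each component is compiled from source at runtime, a broken edit is rejected while the old version keeps running, and a crash at call time stays contained to that one component.

```python
components = {}   # name -> currently loaded handler function

def load(name, source):
    # Compile and swap in a new version of a component; on failure, keep the old one.
    try:
        namespace = {}
        exec(compile(source, filename=name, mode="exec"), namespace)
        components[name] = namespace["handle"]
        print(f"{name}: swapped in new version")
    except SyntaxError as err:
        print(f"{name}: new version rejected ({err}); keeping the old one")

def call(name, *args):
    # Call a component; its failures never take down the caller.
    fn = components.get(name)
    if fn is None:
        return f"<{name} unavailable>"
    try:
        return fn(*args)
    except Exception as err:
        return f"<{name} failed: {err}>"

load("greeter", "def handle(who): return 'hello ' + who")
print(call("greeter", "world"))                    # hello world

load("greeter", "def handle(who: return 'oops'")   # broken edit: rejected, old version survives
print(call("greeter", "world"))                    # still: hello world
```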
Not a problem in a toy app, but in something like a huge program, it can be a PITA to reload everything and get back to where you were.
but that's not true: smalltalk, lisp, pike, erlang and some other languages allow you to change code at runtime, only requiring the recompilation of the changed unit of code (depending on the language. in pike it's at the class/object level)
> process-like isolation barriers around small pieces of a program, and an object-database-like datastore that can never be corrupted thanks to transactional changes (which can be rolled back, enabling things like time-travel debugging)
doesn't smalltalk do pretty much that? i'd be really interested in learning how your idea differs. you may also want to look at societyserver/open-sTeam: https://news.ycombinator.com/item?id=42159045
it's a platform written in pike that implements an object storage, and allows code objects in it to be modified at runtime. transactions are at the object/class level (if the class fails to compile, the objects are not replaced). it stores versions of classes so a rollback is possible, although not implemented in the interface (meaning right now, if i want an older version i have to roll back manually).
> Such a system would allow you to fearlessly change any part of the source code of a running application at runtime - even if you mess up the code of a component, say even to the point that it doesn't compile, all that would happen is that single component would cease to function without affecting the rest of the app.
smalltalk does that, as does societyserver/open-sTeam, or the roxen web application server (also written in pike), and i am pretty sure some lisp and erlang systems do as well.
As for smalltalk, I am also not intimately familiar with the language, but what I have in mind is somewhat lower level, with emphasis on C-like struct layouts stored in a POD way (so raw structs inside arrays and the like).
I'd say a key difference in my language (working name Dream, because I started the project as my 'dream' language, and picking names is hard) is that these isolation contexts are explicit, and your pointers can't really cross them.
There are special 'far' pointers that do have the ability to reference external objects in a different context, but there's an explicit unwrap operation that needs to happen, and it can fail, as that object is not guaranteed to be reachable for whatever reason. Processes can be explicitly deleted, meaning all reference operations to them will fail.
To be clear, when i say process, i mean my lightweight internal isolation thing.
So in summary, my language is procedural inside processes, with in-process garbage collection, C-like performance and explicit method calls. Between processes, you either have smalltalk-like signals, or you can do Rust-style borrows, where you can access objects inside the process for the duration of a method call.
It has erlang-like 'just let it crash' philosophy, but again is a C-like procedural language (or shall I say Go-like, since it has total memory safety and GC).
It also has familiar C-like syntax, and quite a small(ish) feature set outside of the core stuff.
I have a huge doc written up on it, no idea if it would work and if it did, it would be useful, but I do have some tentative confidence in it.
(Also no claims on being original or inventive.)
pike/roxen had a brief window of growth in the 90s but the leaders at the roxen company (not the devs) missed the opportunity to work with the FOSS community.
pike is fully C-syntax, and it is very performant, so that may be interesting for you.
societyserver is my fork/continuation of a university project called open-sTeam that stopped development more than a decade ago. i continue to use it and when i am not busy earning money try to work on it, but i haven't yet been able to build a community around it.
the process isolation you talk about sounds like something that erlang promises as well, but i don't know enough about erlang to tell. i'd be curious to learn more though.
open-sTeam/societyserver built an object-level access control system. method calls on other objects are intercepted and only allowed to pass if the caller has the necessary permission to access that object.
it's not process isolation, but also a concept i find interesting
With Tcl/Tk, it's quite easy to create a GUI that's interactive AND a console you can script in at the same time to inspect / edit / change code.
For example, don't like your window attributes? Write code to destroy it, and re-create it and keep your "live" data unchanged, and it will redisplay in the new style / layout.
And sure, you could code up atomic transactions quite easily.
Itcl even lets you create / add / remove classes or specific class instances on the fly, or redefine class methods.
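For readers who don't know Tcl, here is the same idea sketched in Python's tkinter (which wraps Tk): an ordinary window plus an in-app console whose input is executed against the live widgets. The widget names here are arbitrary.

```python
import tkinter as tk

root = tk.Tk()
root.title("Live-scriptable window")

label = tk.Label(root, text="Hello")
label.pack(padx=20, pady=20)

console = tk.Entry(root, width=60)
console.pack(fill="x")

live = {"tk": tk, "root": root, "label": label}   # namespace the console can see and mutate

def run(event):
    # Execute whatever was typed against the live widgets,
    # e.g. label.config(text="New", bg="yellow") or label.destroy()
    try:
        exec(console.get(), live)
    except Exception as err:
        print(err)
    console.delete(0, "end")

console.bind("<Return>", run)
root.mainloop()
```

Destroying the label from the console and re-creating it in a new style works exactly as described above; the rest of the window keeps its live state.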
https://www.jetbrains.com/help/idea/altering-the-program-s-e... -> https://www.jetbrains.com/help/idea/pro-tips.html#drop-frame
1: pedantically, you're recompiling the whole class, but usually it's only one method changing at a time unless things are really going bananas
DCEVM (RIP) allowed swapping the method signature, too, but that is a lot more tricky to use effectively during debugging (e.g. popping the stack frame doesn't magically change the callsite so if you added extra params it's not going to end well) e.g. https://github.com/TravaOpenJDK/trava-jdk-11-dcevm#trava-jdk...
Tcl isn't as widely known and used as it deserves to be. I think that's in part due to its syntax being sufficiently different from "mainstream" languages. The learning curve isn't particularly steep, but enough so that developers question whether it's worth the effort to go there.
FWIW Tcl 9.0 has recently been released. The language has been enriched with sophisticated object-oriented capabilities, coroutines, full math tower, etc. It's also rather easy to write extensions in C.
Anyway, the GUI toolkit (Tk) has been "borrowed" by many other languages (e.g., Python's tkinter), so quite a few programmers use TclTk, know it not.
Unfortunately I think that while there’s a decent number of power users and people who have the aptitude to become power users who will make use of software made to be deeply customizable, they are outstripped many times over by people who don’t see software that way and have no interest in learning about it. People are quick to point fingers about why the situation is as it is, but the truth is that it was always going to be this way once computers became widely adopted. It’s no different from how most people who drive cars can’t work on them and why few feel comfortable making modifications to their houses/apartments. There’s just a hard limit to the scope and depth of the average individual's attention, and more often than not technical specialization doesn’t make the cut. No amount of gentle ramping will work around this.
That doesn’t mean we shouldn’t build flexible software… by all means, please do, but I wouldn’t expect it to unseat the Microsofts and Googles of the world any time soon. I do however think that technically capable people should do anything they can to further the development of not just flexible, but local-first, hackable software. Anything that’s hard-tethered to a server should be out of the running entirely and something you can keep running on your machine regardless of the fate of its developer should take priority over more ephemeral options.
What if, instead of cars and driving, we used reading and writing as the metaphor for the kind of media/utility computing can be? I'd argue it then changes the whole nature of the argument.
If we’re looking for levers to pull to help more people become advanced computer users, I believe progressive disclosure combined with design that takes advantage of natural human inclinations (association, spatial memory, etc) are much more powerful. Some of the most effective power users I’ve come across weren’t “tech people” but instead those who’d used iMac for 5-10 years doing photography or audio editing or whatever and had picked up all of the little productivity boosters scattered around the system ready for the user to discover at just the right time.
With that in mind, I think the biggest contributor to reduced computer literacy is actually the direction software design has taken in the past 10-15 years, where proper UI designers have been replaced with anybody who can cobble a mockup together in Photoshop, resulting in vast amounts of research being thrown out in favor of dribbble trends and vibes. The result is UI that isn’t humanist, doesn’t care to help the user grow, and is made only with looking pretty in slideshows and marketing copy in mind.
The average person is also a crappy writer, bad musician and lousy carpenter. But a notepad and a pen don’t tell me how to use them. They don’t limit my creative capacity. Same story with a piano, or a hammer and chisel. I wish computers were more like that.
Your point stands. Most notebook users never use it to write a bestselling novel, or draw like Picasso. But the invitation to try is still in the medium somehow. Just waiting for the right hand.
I agree with the rest of your comment. As software engineers, we could build any software we want for ourselves. It’s telling that we choose to use tools like git and IntelliJ, stuff that takes months or years to master. I think it’s weirdly perverted to imagine the best software for everyone else is maximally dumbed down. That's not what users want.
Rather than aiming for “software that is easy to use” I think we should be aiming for “software that rewards you for learning”. At least, in creative work. I’m personally far more interested in making the software equivalent of piano than I am in making the software equivalent of a television set.
Still working on the UX a little but it seems close to what you want (and I agree). The vision statement is about creating immortal software exactly to fight bitrot https://github.com/tomlarkworthy/lopecode
I've been to hotel rooms that looked identical to each other. I've never been to anybody's long-term home that wasn't unique—and unique in obvious, personalized ways. Even the most regularized housing ends up unique: I've visited everything from US dorm rooms to ex-Soviet housing blocks to cookie-cutter HOA-invested suburbs and yet, rules and norms aside, folks' private spaces were always unique, adapted through both conscious action and by unconscious day-to-day habits.
Just because 90% of these modifications did not need more DIY tools than the occasional hammer and nail does not mean they don't "count". That just shows that reducing friction, risk and skill requirements matters.
Gentle ramping helps in two ways. For people who would be inclined to get into more "advanced" modifications, it lowers the activation energy needed and makes it easier to learn the necessary skills. But even for people who would not be inclined to go "all the way", it still helps them make more involved modifications than they would otherwise. A system with natural affordances to adaptation lets people make the changes they want with less thought and attention than they would otherwise need—the design of the system itself takes on some of the cognitive load for them.
With physical objects like home furniture, the affordances stem from the physical nature of the item and the environment. With software, the affordances—or lack thereof—stem entirely from the software's design.
Mainstream software systems are clearly not designed to be adaptable, but we should not take this as a signal about human nature. Large, quasi-monopolistic companies are driven by scalability, legibility and control far more than user empowerment or adaptability. And most people get stuck with these systems less because they prefer the design and more because there are structural and legal obstacles to switching. The obstacles are surmountable—you can absolutely use a customized Linux desktop day-to-day, I do!—but they add real friction. And, as we repeatedly see through both research and observation, friction makes a big difference to most people. Friction has an outsize impact not because of people's immutable preferences but, as you said, because people have finite pools of time and attention with too many demands to do everything.
Great to see them pushing work like this, building experiments, and talking about what they’ve learned.
What milestones would you like to hit before open-sourcing it? As an outsider, it looks like it has a LOT of features, and I wonder if there's feature creep. Still, version control for everything is a tall order, so perhaps it needs plenty of time to bake.
To answer your question: although we use Patchwork every day, it’s currently very rough around the edges. The SDK for building stuff needs refinement (and SDKs are hard to change later…) Reliability and performance need improvement, in coordination with work on Automerge. We also plan to have more alpha users outside our lab before a broader release, to work through some of these issues.
In short, we feel that it’s promising and headed in a good direction, but it’s not there yet.
Most people just don’t have the skills or inclination to tinker even with ham radios or cars.
On the other hand with the right to repair, you could call a repairman. And now — an agent or robot!!
I get it too, the world moved on, people have to manage APIs, updates... but yeah.
In UNIX systems you can use pipes between programs (if the programs support that; many modern programs don't support it very well), although there are still problems with that too. (I also disagree with the idea that text (especially Unicode text, although the objections apply even without a specific character set) would be the universal format.)
My idea of a computer design and operating system design is intended to do things which will avoid the problems mentioned there (although this does not avoid needing actually good programming, and such things as FOSS etc still have benefits), as well as having other benefits.
Some of the features of my design are: CAQL (Command, Automation, and Query Language), UTLV (Universal Type/Length/Value), and proxy capabilities. (There are more (e.g. multiple locking and transactions), but these will be relevant for this discussion.)
Like OpenDoc and OLE, you can include other kinds of things inside of any UTLV file, by the use of the UTLV "Extension" type. The contents of the extension would usually itself be UTLV as well, allowing the parts to be manipulated like the others, although even if the contents isn't UTLV (e.g. for raster images), you would have functions to convert them and deal with them, so it will still work anyway.
With those things in combination with the accessibility (one of the principles is that accessibility features are for everyone, not only for the people with disabilities; among other things this means that it does not use a separate "accessibility" menu) and m17n and other features, you can also do such things as affect colours, fonts, etc, without much difficulty. (They might not seem related at first, but they are related.)
I had also recently seen https://malleable.systems/mission/ which seems to be related (you might want to read this document even if you are not interested in my own comments). One part says, "If I want to grab a UI control from one application, some processing logic from another, and run it all against a data source from somewhere else again, it should be possible to do so.", and with CAQL and UTLV and proxy capabilities, this can be done easily, because the UI controls are callable objects (which can be used with CAQL) like any other one, the data source can use UTLV (which can be queried and altered by CAQL), and the interaction between them can use proxy capabilities.
Another reference I usually bring up is Alan Kay's talk on smalltalk: https://www.youtube.com/watch?v=AnrlSqtpOkw&t=4m19s
My related comments on this, just to show other stories along this theme:
- https://news.ycombinator.com/item?id=36885940
FOSS also helps, but being FOSS does not by itself solve the problem (as is mentioned in the article); still, it is one of the things to be done, too.
UNIX programs with pipes are also one thing that helps, but not quite perfectly. Nevertheless, writing programs that do this when working with UNIX systems is helpful. (For working with picture files, I almost entirely use programs that I wrote myself which use farbfeld, and use pipes to combine them; I then convert to PNG or other formats when writing to disk. I do not use farbfeld as a format to store pictures on disk, but only as the intermediate format to use with pipes.)
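As one small example of that pipe-friendly style (a hypothetical filter, not one of the commenter's own tools): a farbfeld image is just an 8-byte magic string, big-endian width and height, and 16-bit big-endian RGBA components, so a colour-inverting filter fits in a few lines and slots between the standard png2ff and ff2png converters.

```python
#!/usr/bin/env python3
# invert.py: read a farbfeld image on stdin, invert its colours, write it to stdout.
# Usage in a pipe:  png2ff < in.png | python3 invert.py | ff2png > out.png
import struct
import sys

data = sys.stdin.buffer.read()
assert data[:8] == b"farbfeld"                    # magic, then 32-bit BE width and height
width, height = struct.unpack(">II", data[8:16])  # header fields (not needed for inverting)

pixels = bytearray(data[16:])
for i in range(0, len(pixels), 8):                # each pixel: four 16-bit BE components, RGBA
    for c in range(3):                            # leave the alpha channel alone
        off = i + 2 * c
        value = 65535 - struct.unpack(">H", pixels[off:off + 2])[0]
        pixels[off:off + 2] = struct.pack(">H", value)

sys.stdout.buffer.write(data[:16] + pixels)
```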
For those who don’t know, Delphi was (is?) a visual constructor for Windows apps that you outfitted with a dialect of Pascal. It was effing magic!
Nowadays the web ecosystem is so fast-paced and so fragmented, the choice is paralyzing, confidence is low. The amount of scaffolding I have to do is insane. There are tools, yes, cookie cutters, npx’s, CRAs, copilots and Cursors that will confidently spew tons of code but quickly leave you alone with this mess.
I haven’t found a solution yet.
https://news.ycombinator.com/item?id=43913414
A quick search yielded:
https://wiki.freepascal.org/Developing_Web_Apps_with_Pascal
and
https://www.reddit.com/r/pascal/comments/es8wlh/free_pascal_...
GDScript is pretty similar in feel to Python, and you can also use C# if you want to. It has some level of GUI controls in the framework (not sure how many yet, but all of the GUI controls used to build the editor are available for use).
I want to believe the 3d capabilities might be useful for some kind of UI stuff, but I don't really have a real idea how to make that work - just a "wouldn't it be neat if..." question about it right now.
- Makerkit Next.js/Supabase Starter Kit
- Python backend processing
- BMAD framework for building specification
- Claude Code with Max subscription
- Cursor for in-IDE adjustments
I have managed to make some pretty incredible tools, it definitely feels like magic.
I would say I split my time 70% using BMAD as an assistant to build out my scope and clarify what I am trying to do in my own head, then 30% supervising Claude Code.
I have also managed to build simpler tools using Streamlit to great effect.
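For a sense of how little code such a tool can take, here is a minimal sketch of a home-cooked Streamlit app (the behaviour is made up for illustration), using only standard Streamlit calls:

```python
# tool.py -- run with: streamlit run tool.py
import pandas as pd
import streamlit as st

st.title("Quick CSV filter")

uploaded = st.file_uploader("Drop a CSV here", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    column = st.selectbox("Column to filter on", df.columns)
    needle = st.text_input("Keep rows containing")
    if needle:
        df = df[df[column].astype(str).str.contains(needle, case=False)]
    st.dataframe(df)
```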
How can we draw apps into using a common data backend owned by the user?
I love napari. I remember downloading it on a whim, and while poking around, I accidentally opened its built-in python console. Half the time, if I'm writing a plugin for it, I open up the console just so that I can play around and print out stuff and try new things.
Everything, even the viewer itself, is accessible from the repl. Nothing hides behind a black box.
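A small sketch of what that feels like in practice (assuming a working napari install; the layer values here are arbitrary): the viewer and its layers are ordinary Python objects, so tweaking an attribute is equivalent to dragging the corresponding control in the GUI.

```python
import numpy as np
import napari

viewer = napari.Viewer()
layer = viewer.add_image(np.random.random((256, 256)), name="noise")

# The GUI and the objects are the same thing: changing the object updates the view.
layer.opacity = 0.5
viewer.layers["noise"].visible = True

napari.run()   # start the event loop
```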
I've also patched open-source programs locally in order to get them to do what I want but wouldn't be suitable for upstreaming. For example, I've reverted the order of buttons in a "do you want to save?" close dialog when they changed in an update.
Minor stuff, but just being able to do this is amazing. The trouble is, developers - at least those of closed-source programs - don't want you to be able to do that, partially due to a lot of them relying on security by obscurity in order to earn money.
As such, it feels like the only way you're going to get developers to be on board with something like this is to be able to have them specify what people can change and what people can't change - and that's something that developers already do (whether they realise it or not) with things like INI files and the Registry.
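A hedged sketch of that pattern (the section and key names are invented): the developer ships an INI file naming exactly which knobs are user-serviceable, and the program reads it at startup with Python's configparser.

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[close_dialog]
; user-serviceable: which button comes first
save_button_first = yes

[theme]
accent_color = #3366cc
""")

save_first = config.getboolean("close_dialog", "save_button_first")
accent = config.get("theme", "accent_color")
print(save_first, accent)   # True #3366cc
```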
This is why people using UNIX-based systems campaign for small programs that do one thing and do it well. Being able to combine these small programs into a pipeline that does exactly what you want? Now that's amazing.
You can tell when a platform is succeeding at this by looking at its adoption among non-programmers.
Consider what Microsoft's COM/OLE ecosystem already provides:
- In-process and cross-process language-agnostic API bindings.
- Elaborate support for marshalling objects and complex datastructures across process boundaries.
- Standardized ways to declare application and document object models that can be used by external applications, or internally by application add-ons.
- A standardized distribution system for application extensions, including opportunities for monetization.
- Standardized tools for binding application APIs to web services and database services.
- A default scripting engine (VBA) that can be embedded into applications.
- Admittedly primitive and mostly ill-advised support for dynamically typed objects, and composable objects.
And it provides opportunities for all of the levels of application customization you seem to be looking for.
- Trivial tiny customizations using in-app VBA.
- The ability to extend your application's behavior using addons downloadable from a marketplace WITHOUT trying to capture a percentage of licensing revenue from those who want to monetize their add-ons.
- The ability to write scripts that move data between the published document object models of various applications (and a variety of standard data formats).
- The ability to write fully custom code that lives within applications and interacts with the UI and with live documents within those applications (i.e. write-your-own add-ons).
Plus it would be enormously fun to build the equivalent functionality of COM/OLE with all the benefits of hindsight, and none of the cruft incurred by Visual Basic, with lessons in hand from some of the things COM didn't do well. (SVG as a graphics transport, perhaps? A more organized arrangement of threading model options? Support for asynchronous methods? A standardized event mechanism?)
Questions that come to mind:
- What can you get away with not doing that COM does do? Not much, I think.
- How could you make it better? A bunch of ways!
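As a concrete taste of the cross-application scripting this already enables today (a sketch requiring the pywin32 package and a local Excel install on Windows; the object model driven here is the same one VBA sees):

```python
import win32com.client

# Attach to Excel through its published COM object model.
excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True

workbook = excel.Workbooks.Add()
sheet = workbook.Worksheets(1)
sheet.Cells(1, 1).Value = "Item"
sheet.Cells(1, 2).Value = "Count"
sheet.Cells(2, 1).Value = "widgets"
sheet.Cells(2, 2).Value = 42

workbook.SaveAs(r"C:\temp\report.xlsx")
excel.Quit()
```

The same script could just as easily pull the data from Word or Outlook, because each application publishes its document object model the same way.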
I really love those customization power charts and really happy to see that my anecdote-based thoughts might actually have some grounding behind them.
https://news.ycombinator.com/item?id=44236729
Ultimately I think it’s too open ended. Users got overwhelmed with a chat interface and couldn’t think of something useful to build on the spot.
Maybe a slow burn approach works best.
The gaming audience is probably the most demanding of any regarding customization, modding, accessibility and other similar principles -- when the market forces line up and developers are flush enough to offer more malleability, video games frequently do.
But... I think a lot of it already is customizable, and users don't want to configure. End-users (or doctors) hate having to learn more about software than they absolutely must. Just an example, Epic (EHR from the essay) definitely has the ability to mark fields as optional/required. Someone just needs to get in and do it, and they don't want to/know how.
The inaccessibility of config to laypeople may actually be where AI shines. You prompt an in-app modal to change X to Y, and it applies the change. A natural language interface to malleability.
gjsman-1000•1d ago
> "The original promise of personal computing was a new kind of clay—a malleable material that users could reshape at will. Instead, we got appliances: built far away, sealed, unchangeable. When your tools don’t work the way you need them to, you submit feedback and hope for the best. You’re forced to adapt your workflow to fit your software, when it should be the other way around."
I already have objections: Users and businesses overwhelmingly voted with their wallets that they want appliances. The big evil megacorps didn't convince them of this - Windows was a wildly malleable piece of software in the 90s and 2000s, and it didn't exactly win love for it. The Nintendo Switch sold 152 million units; the malleable Steam Deck hasn't broken 6 million.
Software that isn't malleable is easier to develop, easier to train for, easier to answer support questions for, and frequently cheaper. Most users find training for what's off-the-shelf already difficult - customizing it is something that only a few percent would even consider, let alone do. Pity the IT Department that then has to answer questions about their customizations when they go wrong - user customizations can easily become their own kind of "shadow IT."
The send off is also not reassuring:
> "When the people living or working in a space gradually evolve their tools to meet their needs, the result is a special kind of quality. While malleable software may lack the design consistency of artifacts crafted behind closed doors in Palo Alto, we find that over time it develops the kind of charm of an old house. It bears witness to past uses and carries traces of its past decisions, even as it evolves to meet the needs of the day."
If you think this is okay, we've already lost. People simply will not go back to clunky software of the 2000s, regardless of the malleability or usability.
gklitt•1d ago
You make a fair point! Ease of use matters. We all want premade experiences some of the time. The problem is that even in those (perhaps rare!) cases where we want to tweak something, even a tiny thing, we’re out of luck.
An analogy: we all want to order a pizza sometimes. But at the same time, a world with only food courts and no kitchens wouldn’t be ideal. That’s how software feels today: the “kitchen” is missing.
Also, you may be right in the short term. But in the long run, our tools also shape our culture. If software makes people feel more empowered, I believe that’ll eventually change people’s preferences.
gjsman-1000•1d ago
For something as complex as software, it's sad, but it's almost... okay? Every industry has gone through this; there was a time when cars were experimental and hand-assembled. Imagine if Henry Ford in the 1920s had focused on democratizing car parts so anyone can build their own car with thousands of potential combinations; I don't think it would have worked out. It is still true that you can, technically speaking, build your own car; but nobody pretends that we can turn everyone into personalized car builders if we just try hard enough.
gklitt•1d ago
On that note, Robin Sloan has a beautiful post about software as a home cooked meal…
https://www.robinsloan.com/notes/home-cooked-app/
That said, I think talking about cars may be stronger ground for the argument you’re making. Mass production is incredible at making cheap uniform goods. This applies even more in software, where marginal costs are so low.
The point of our essay, though, is that the uniformity of mass produced goods can hinder people when there’s no ability to tweak or customize at all. I’m not a car guy, but it seems like cars have reasonably modular parts you can replace (like the tires) and I believe some people do deeper aftermarket mods as well. In software, too often you can’t even make the tiniest change. It’s as if everyone had to agree on the same tires, and you needed to ask the original manufacturer to change the tires for you!
xg15•1d ago
Since the last decade or so at the latest, software is often designed as an explicit means of power over users, and applications are made deliberately inflexible in order to, e.g., coerce users to watch ads, purchase goods or services, or simply stay at the screen for longer than intended.
(Even that was already the case in niches, especially "shareware". But in a sense, all commercial software is shareware now)
bravesoul2•1d ago
I am a bit fed up with software, less because of malleability and more because of the cloud walled gardens. I can't open my Google doc in something else like I can a PDF in different programs. Not without exporting it.
This got me interested, and I found remotestorage.io, which looks very promising. I like the idea that I buy my 100GB of cloud storage from wherever and then compose the apps I want to use around it.
I hadn't thought of malleable software... that's a whole other dimension! Thanks for introducing this as a concept worth talking about. Of course I have heard of elisp and used excel but haven't thought of it front and centre.
In terms of cooking... I feel like cooking is potentially easier because, for the most part (with some exceptions), if I know food hygiene and how to cook stuff, then it is an additive process. Chicken plus curry plus rice. Software is like this too, until it isn't. Excel docs do a great simple budget but not a full accounting suite. With the latter you get bogged down in fixing bugs in the sheet as you try to use it.
I think it is good you are researching this, as these could be solvable problems for many cases.
Something I have always thought about is that sometimes it matters less whether the software is open source than whether the file format is. Then people can extend it by building more around the file format. A tool might work on part of the format where an app works on all of it. I use free tools to sign PDFs, for example.
jcynix•1d ago
People want to create, but need tools to make this easier / more abstract than regular programming. Most companies want to get them into their walled gardens instead, especially web-based companies today.
danhite•1d ago
> That’s how software feels today: the “kitchen” is missing.
I believe you'll want to read this essay which appeared in the Spring 1990 issue of Market Process, a publication of the Center for the Study of Market Processes at George Mason University ...
"An Inquiry into the Nature and Causes of the Wealth of Kitchens" by Phil Salin
Having worked for him, I'd say his Wikipedia entry doesn't do him justice, but it is a good start if you're curious--like your Ink & Switch group, he spent many years trying to create a world-changing software/platform [AMIX, sister co. to Xanadu, both funded in the 1990s by Autodesk].
http://www.philsalin.com/kitchens/index.html#:~:text=An%20In...
conartist6•17h ago
I'm really curious to see how the overlap with BABLR plays out. In many ways we're doing the same experiments in parallel: we're both working on systems that have a natural tendency to become their own version control, and which try to say what the data is without prejudice as to how it might be presented.
In particular BABLR thinks it can narrow and close the ease-of-use gap between "wire up blocks" style programming and "write syntax out left to right" style programming by making a programming environment that lets you wire up syntax tree nodes as blocks.
It's still quite rough, but we have a demo that shows off how we can simplify the code editing UX down to the point where you can do it on a phone screen:
https://paned.it/
Try tapping a syntax node in the example code to select that node. Then you can tap-drag the selected (blue) node and drop it into any gap (gray square). The intent is to ensure that you can construct incomplete structures, but never outright invalid ones.
xg15•1d ago
Is that so? I remember the custom styling options in Win98 and ME/2000 still very fondly. And there were lots of people who invested effort in making their own color schemes, meticulously assembling personal toolbars in Office, etc. (The enthusiasm went away the first time you had to reinstall and were faced with the choice of doing it all again or sticking with the defaults. But I'd chalk this up to Windows not treating the customization data as important enough to provide backup/export functionality, not that people didn't want to customize)
The features increasingly went away in later Windows and Office versions, but I assumed it was some corporate decision. Was there ever actual backlash from users against those features?
RiverCrochet•1d ago
Non tech-oriented people, the masses, absolutely love customizability and malleability--but aren't willing to handle the responsibility. They will reach out to tech support who can't possibly know every customization option of every application and its effects, and complain when they tell them to reset/reinstall.
And in a corporate environment where the company provides the PC, the company would rather not deal with it. Office dominates at the workplace, is mostly making money from corporate users, and users want it to behave the same way it does in the workplace. So any backlash by users is simply not going to matter unless it might cause companies to not renew their licenses.
A company I work for is moving to Office-on-the-web for PCs that are used by people who don't really use Office that much except possibly to read Word docs, in order to save on licensing costs I presume. It's even less customizable than any desktop version. So the trend is going to continue.
Bjartr•1d ago
Even if this specific example is flawed, non-technical users can and do end up in similar non-sensical situations that require a call to support to sort out. The more customization that's possible, the more complicated those calls can get. (Think of the support guy that has to figure out that Grandma's Windows Home setup has custom group policy settings that her well-meaning grandson setup to make things simpler for her by hiding this or that, and now she can't follow the tech's instructions that work for 99.9% of users)
Not only that, but they do so enough that the added cost to field those support calls is enough for companies to change their products to reduce their likelihood.
Almost no-one on this forum falls into the category of user I'm describing. And this kind of user is one of the most common for general consumer software. There is a real cost burden to supporting software with configurability.
And when this kind of thing gets messed up, do users go "Oops! My bad!"? No, they go "This software sucks, I'm going to use <competitor> instead where this kind of thing never happens!"
aspenmayer•1d ago
I can’t count how many people I helped to regain access to their computer login because of losing access to the method used to receive 2FA codes for Microsoft accounts, which is necessary to login if you have forgotten your password. The Microsoft account user setup won’t let you make a password-free login unless you use a local account, and short easily guessable passwords don’t meet their online account security requirements. Most people probably don’t want a Microsoft account if it has this failure mode, but people don't know the trade offs at the time of user account setup, and Microsoft uses that ignorance as leverage to get people signed into everything so that you will have have opted-in to all of this. It’s such an own-goal by Microsoft and it makes me feel for users who have no idea how any of this works. It’s a hard problem to solve, I’m sure, but it shouldn’t be like this.
The people who are most disadvantaged by the high tech highly secure thrust of modern tech are those who have the least skills with technology. Low skill users are also most at risk for scams and malware and other kinds of tactics, so I don’t mean to say that having no password is good. Having no password is a bad solution to the problem of computers being hard to use for many people, and they don’t know what they don’t know, so anything that they haven’t seen before is a cause for concern or alarm to their mind. Since most people have forgotten that they even have a Microsoft account by the time they have trouble logging in to their computer using one, they click around until they get to the account recovery, and then usually get their account locked because they can’t solve the security challenges that they never faced before or anticipated when doing the initial setup perhaps years prior.
jrapdx3•1d ago
Users can be fearful of "messing it up" if they change defaults. Making changes necessarily confers responsibility to follow instructions, learn how to alter settings and know the set of options that are appropriate to change and which are not.
conartist6•18h ago
If you split the support costs between many members of a community though, you don't need to fear customization. Then, ideally, the users who are most alike will support each other, the same way you can get a degree of support for some particular flavor of Linux by seeking out other people who use that flavor (or another one that's enough like it)
Backlash will be in the form of working, competing software maintained by communities, precisely because this is the only form of backlash that might cause companies not to renew their licenses.
bigstrat2003•1d ago
Software in the 2000s was markedly better than software today. But it's cheaper and easier for companies to produce shitty software, so that's what we get. It has nothing to do with consumer preference.