> 2. Restrictions. Except as expressly specified in this Agreement, you may not: (a) transfer, sublicense, lease, lend, rent or otherwise distribute the Software or Derivative Works to any third party; or (b) make the functionality of the Software or Derivative Works available to multiple users through any means, including, but not limited to, by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, software-as-a-service (SaaS) or any other type of services. You acknowledge and agree that portions of the Software, including, but not limited to, the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Museum and its licensors.
Edit: Disappointed is really not the right word, but I am failing to find the right word.
1) these historical source code releases really are largely of historical interest only. The original programs had constraints of memory and CPU speed that no modern use case does; the set of use cases for any particular task today is very different; what users expect and will tolerate in UI has shifted; and available programming languages and tooling today are much better than the pragmatic options of decades past. If you were trying to build a Unix clone today, there is no way you would want to start with the historical release of Sixth Edition. Even xv6 is only "inspired by" it, and gets away with that because of its teaching focus. Similarly, if you wanted to build some kind of "streamlined lightweight photoshop-alike", then starting from scratch would be more sensible than starting with somebody else's legacy codebase.
2) In this specific case the licence agreement explicitly forbids basically any kind of "running with it" -- you cannot distribute any derivative work. So it's not surprising that nobody has done that.
I think Doom and similar old games are one of the few counterexamples, where people find value in being able to run the specific artefact on new platforms.
> When will we get the linux port of Photoshop 1.0?
I think Adobe decided to release the code because they knew it was only valuable from a historical standpoint and wouldn't let anyone actually compete with Photoshop. If you wanted to start a new image editor project from an existing codebase, it would be much easier to build off of something like Pinta: https://www.pinta-project.com/
Your disappointment seems to be a form of FOMO, but there isn't actually anything that you're MO here.
My personal thoughts are: open-source software is great, probably the ideal condition, but I wish the general software distribution environment was not effectively all or nothing: open source or compiled binary. I wish that protected-source software were considered a more valid distribution model, where you can compile, inspect, fix, and run the software but are not allowed to distribute it. Trying to diagnose a problem when all you have is a compilation artifact is a huge pain. You see some enterprise software like this, but for the most part it's either open source or no source.
I am a bit surprised that there is no third party patch to get Photoshop 1.0 to run under modern Linux or Windows, not for any real utility (at this point MS Paint probably has better functionality), but for the fun of it. "This is what it feels like to drive Photoshop 1"
Words have meaning and all that.
* If a country doesn't have "closed borders" then many foreigners can visit if they follow certain rules around visas, purpose, and length of stay. If instead anyone can enter and live there with minimal restrictions we say it has "open borders".
* If a journal isn't "closed access" it is free to read. If you additionally have permissions to redistribute, reuse, etc then it's "open access".
* If an organization doesn't practice "closed meetings" then outsiders can attend meetings to observe. If it additionally provides advance notice, allows public attendance without permission, and records or publishes minutes, then it has “open meetings.”
* A club that doesn't have "closed membership" is open to admitting members. If anyone can join provided they meet the relevant criteria (if any), then it's "open membership".
EDIT: expanded this into a post: https://www.jefftk.com/p/open-source-is-a-normal-term
* A set that is open can also be closed.
And that has nothing to do with whether someone can be "blamed" for ignoring the actual meaning of a term with a formal definition.
Ironic put-down when “open source” consists of two words which have meaning, but somehow doesn’t mean that when combined into one phrase.
Same with free software, in a way.
Programmers really are terrible at naming things.
:)
The fact is that your claim "“open source” consists of two words which have meaning, but somehow doesn’t mean ==>that<== when combined into one phrase" is simply false, as there is no "that".
> Same with free software, in a way.
This is a much more supportable argument, but note the change in wording: "free software" is not the same as "free source". The latter suggests that one doesn't have to pay for the source, but says nothing about what one can do with the source or one's rights to software built from that source.
As for "free [as in freedom] software", I think there would have been less contention if RMS/FSF had called it "freed software" or "liberated software", and it would have been more consistent with their stated goals.
> Programmers really are terrible at naming things.
This is silly sophism based on one anecdote that you didn't even get right. Naming things well is hard, and names in software have conditions that don't exist in more casual circumstances. The reality is that good programmers put a lot of effort into choosing names and generally are better at it than the population at large.
You're welcome to think what you want, but I've had to explain to enough juniors enough times what "open" actually means, so I know what people without any preconceived notions think it means, vs what experts on HN associate with the word after decades in the industry.
People who are new to the profession entirely think that "open" means "you can look inside." Source: my life, unfortunately.
> ... that you didn't even get right.
FYI: this style of conversation won't get anyone to listen to you. And FWIW I was referencing the quip which I'm sure you're familiar with. It was tongue in cheek.
> The reality is that good programmers put a lot of effort into choosing names and generally are better at it than the population at large.
... isn't that a No True Scotsman?
How big of you.
> I've had to explain to enough juniors enough times what "open" actually means, so I know what people without any preconceived notions think it means, vs what experts on HN associate with the word after decades in the industry.
This is not relevant--it addresses a strawman and deflects from the actual claim you made and that I disputed.
> FYI: this style of conversation won't get anyone to listen to you.
Projection. I will in fact cease to respond to you.
> ... isn't that a No True Scotsman?
Obviously not. Failing to understand the difference between "real", "actual", "true" etc. which are the essence of the fallacy and valid qualifiers like "good" shows a fundamental failure to understand the point of the fallacy.
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This makes the license transitive so that derived works are also MIT licensed.
[1] https://en.wikipedia.org/wiki/MIT_License?wprov=sfti1#Licens...
AGPL and GPL are, on the other hand, as you describe.
You also could not legally remove the MIT license from those files and distribute with all rights reserved. My original granting of permission to modify and redistribute continues downstream.
*: which unfortunately most users of MIT libraries do not follow as I often have an extremely difficult time finding the OSS licenses in their software distributions
On the contrary: https://opensource.org/osd
Need more of a citation to understand that..?
> Need more of a citation to understand that..?
This nonsense is not at all relevant to the claim for which I asked for a citation: "No, the original definition of open-source is source code that is visible (open) to the public."
No support for that claim has ever been offered.
https://fsck.technology/software/Silicon%20Graphics/Software...
It nailed it, first try.
I cannot, unfortunately, share a link to the website it created because of the license.
LLM translation of historical software to modern platforms is a solved problem. Try it, you'll see.
I used https://exe.dev/ and their Shelley agent to drive Claude. Give it a try, it is jaw dropping.
Here is the prompt I gave it:
"Use wasm and go and a 68000 emulator to get the Photoshop 1.0.1 software at https://d1yx3ys82bpsa0.cloudfront.net/source/photoshop-v.1.0... to run correctly. You should not require an operating system, instead implement the system calls that Photoshop makes in the context of wasm. Because Go compiles to wasm, you might try writing some kind of translator from the pascal to go and then compile for wasm. Or you might be able to find such a thing and use it."
You can give it a try yourself, or contact me for a private link to it (see the CHM license for why I can't make it public).
When a filter is implemented half in Pascal (setup and loop over the rows) and half in assembly (each row) Claude did it all in Go, but the structure in Go is the same: one entry point for setup and iterating on rows, and one function (ported from the assembly) to process each row.
(As for the resource fork, it just reimplemented the UI in HTML. There's not enough info in the transcript of its thinking to know if it read the resource file and understood it, or if it used a general understanding of what was in Photoshop, from training data, to do it.)
My mind is blown. I keep trying to find evidence that it just copied this from someplace, but I can't see how.
Just as an experiment, I fed the resource fork to GPT-5.2 to see whether it could render the windows/dialogs in the resource fork - it did a fairly okay job. I think the fundamental limit it ran up against (and acknowledges) is that a lot of the Mac's classic look and feel was defined programmatically, literally down to calls to RoundRect(...).
https://chatgpt.com/s/t_694dddb290308191babcb07a72367e97
Thanks for posting your experience.
And, for purity/completeness, avoid Maxx Desktop and/or NSCDE; EMWM with XMToolbar is close enough to SGI's Irix desktop.
Just supporting a modern OS's graphical API (The pre-OSX APIs are long dead and unsupported) is a major effort.
I think all floppies are magical :)
Back then, black ones were ordinary, and only white/grey ones were used for licensed software, thus more desirable.
https://computerhistory.org/wp-content/uploads/2019/08/photo...
E.g: https://c7.alamy.com/comp/2AA9BC4/ajaxnetphoto-2019-worthing...
It wasn't even broadband that destroyed that experience, when CDs came around developers realised they had space to just stick a PDF version of the manual on the CD itself and put in a slip that tells you to stick in the CD, run autorun.exe if it didn't already, and refer to the manual on the CD for the rest!
The Office 4.3 set of manuals were large too, but didn't have the information density the AutoCAD ones did.
Even some well-documented modern software is obviously documented by the programmers and programmer-adjacent.
They weren’t like textbooks, which have knowledge that tends to be relevant for decades. You’d get a new set with every software release, making the last 5-20 lbs of manuals obsolete.
You did lose some of the readability of an actual book. Hard-copy manuals were better for that. But for most software manuals, I did more “look up how to do this thing” than reading straight through. And with a pdf on a CD you had much better search capabilities. Before that you’d have to rely on the ToC, the book index and your own notes. For many manuals, the index wasn’t great. Full text search was a definite step up.
Even the good ones, like the 1980s IBM 2-ring binder manuals, which had good indexes, were a pain to deal with and couldn’t functionally match a PDF or text file on a CD for searchability.
You might expect now and again to get some optional updates/patches later, but that was rare - and rarer still for most people to even know about them.
These days, software is never complete. Nothing is done. It's just a point-in-time state with a laundry list of bugs and TODOs that just roll out whenever. The software is just whatever git tag we're pointing to today.
I understand how/why it has become like this - but it still makes me sad.
OMG. Booch?? The father of UML is still around? Given that UML is a true crime against humanity, it just goes to show there is no justice in the world. (I want a lifespan refund for the amount of time I spent learning UML and Design Patterns back in the bad old Enterprise Java days. Oof)
For trivial CRUD apps, and maintaining modified versions of the generated code was a nightmare.
It is also a great way to document existing architectures.
It is like the YAML junk that gets pushed nowadays to the detriment of the proper schemas and validation tools we have in XML.
"There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way."
"This is the kind of code I aspire to write.”
I'm looking at the code and just cannot agree. If I look at a command like "TRotateFloatCommand.DoIt" in URotate.p, it's 200 lines long without a single comment. I look at a section like this and there's nothing literate about it. I have no idea what it's doing or why at a glance:
pt.h := BSR (r.left + ORD4 (r.right), 1);
pt.v := BSR (r.top + ORD4 (r.bottom), 1);
pt.h := pt.h - BSR (width, 1);
pt.v := pt.v - BSR (height, 1);
pt.h := Max (0, Min (pt.h, fDoc.fCols - width));
pt.v := Max (0, Min (pt.v, fDoc.fRows - height));
IF width > fDoc.fCols THEN
pt.h := pt.h - BSR (width - fDoc.fCols - 1, 1);
IF height > fDoc.fRows THEN
pt.v := pt.v - BSR (height - fDoc.fRows - 1, 1);
Just breaking up the function with comments delineating its four main sections and what they do would be a start. As would simple things like commenting e.g. what purpose 'pt' serves -- the code block above is where it is first defined, but you can't guess its purpose until later, when it's used to define something else.

Good code does not make comments unnecessary or redundant or harmful. This is a myth that needs to die. Comments help you understand code much faster: understand the purpose of variables before they get used, understand the purpose of functions and parameters before reading the code that defines them, etc. They vastly aid in comprehension. And those are just "what" comments I'm talking about -- the additional necessity of "why" comments (why the code uses x approach instead of seemingly more obvious approach y or z, which were tried and failed) is a whole other subject.
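To make the distinction concrete, here is a hypothetical C sketch of my own (not code from URotate.p): a "what" comment states purpose so the reader doesn't have to decode arithmetic, and a "why" comment records a non-obvious decision.

```c
/* Hypothetical examples, not from the Photoshop source. */

/* "What" comment: center a dialog of dialog_w pixels on a screen of
   screen_w pixels -- the purpose is clear without decoding the math. */
int center_x(int screen_w, int dialog_w) {
    return (screen_w - dialog_w) / 2;
}

/* "Why" comment: the shift stands in for division because 1990-era
   68000 compilers did not strength-reduce /2; for non-negative values
   the two are equivalent. */
int half_of(int width) {
    return width >> 1;
}
```

Neither comment restates the code line by line; each answers a question the code alone can't.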
For clarity and to demonstrate, this is basically what this function is doing, but in css:
.container {
position: relative;
}
.obj {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}

Every comment is a line of code, and every line of code is a liability; worse, comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion. It’s an excuse to name things poorly, because “good comment.” The purpose of variables should be in their name, including units if it’s a measurement. Parameters and return values should only be documented when not obvious from the name or type—for example, if you’re returning something like a generic Pair, especially if left and right have the same type. We’ve been living with autocomplete for decades; you don’t need to keep variable names short to type.
The problem with AI-generated code is that the myth that good code is thoroughly commented code is so pervasive that the default output mode for generated code is to comment every darn line it generates. After all, in software education, they don’t deduct points for needless comments, and students think their code is now better w/ the comments, because they almost never teach writing good code. Usually you get kudos for extensive comments. And then you throw away your work. The computer science field is littered with math-formula-influenced space-saving one- or two-letter identifiers, barely with any recognizable semantic meaning.
This is exactly my view. Comments, while they can be helpful, can also interrupt the reading of the code. They are also not verified by the compiler; curiously, in an era when everyone goes crazy for Rust safety, nothing is less safe than comments, because they are completely ignored.
I do not oppose comments. But they should be used only when needed.
> comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion
This gets endlessly repeated, but it's just defending laziness. It's your job to update comments as you update code. Indeed, they're the first thing you should update. If you're letting comments "rot", then you're a bad programmer. Full stop. I hate to be harsh, but that's the reality. People who defend no comments are just saying, "I can't be bothered to make this code easier for others to understand and use". It's egotistical and selfish. The solution for confusing comments isn't no comments -- it's good comments. Do your job. Write code that others can read and maintain. And when you update code, start with the comments. It's just professionalism, pure and simple.
(Please note: I'm not arguing against comments. I'm simply arguing that trusting comments is problematic. It is understandable why some people would prefer to have clearly written code over clearly commented code.)
That doesn't justify matching their sloth.
Lead by example! Write comments half a page long or longer, explaining things, not just expanding identifier names by adding spaces in between the words.
That, and I have mixed feelings about commenting code. (Thankfully I don't manage developers. I simply exploit it personally, since it's a skill that I have.) I understand why we do it. I especially appreciate well documented libraries and development tools. On the other hand, I fully understand that comments only work if they are written, read, and updated. The order is important here, since documentation will only be updated if it is read, and it will only be read if it is (well) written. Even then you are lucky if well written documentation is read.
The flip side is that comments are duplication. Duplication is fine if they are consistent with each other. In some respects, duplication is better since it offers multiple avenues for understanding. Yet there is also a high probability that they will get out of sync. Sometimes it is "intentional" (e.g. someone isn't doing their job by updating it). Sometimes it is "unintentional", since the interpretation of human languages is not as precise as the compiler's translation of source code into object code. (Which is a convoluted way of saying that sometimes comments are misinterpreted.)
I like to add myself as a mandatory reviewer of all PRs and then reject changes that don't come with some explanatory comment or fail to update comments.
Even if huge swaths of the codebase are undocumented boring boilerplate, you still have to draw the line somewhere, otherwise you get madness like ten pages of authentication and authorization spaghetti logic without a single descriptive comment.
I've worked at places (early on) that were basically cowboy coding -- zero code review, global variables everywhere, not a comment or test to be seen. Obviously you can't enforce good comments there.
And I've worked at places that were 100% professional -- design documents, full code review, proper design, tests, full comments and comments kept fully up-to-date just like code.
It's just the culture and professionalism. If proper comments are enforced through code review, they happen. Ultimately, the head of engineering just decides whether it's part of policy or not. It's not hard. It's just a top-down decision.
A name and signature is often not sufficient to describe what a function does, including any assumptions it makes about the inputs or guarantees it makes about the outputs.
That isn't to say that it isn't necessary to have good names, but that isn't enough. You need good comments too.
And if you say that all of that information should be in your names, you end up with very unwieldy names, that will bitrot even worse than comments, because instead of updating a single comment, you now have to update every usage of the variable or function.
ORD4 = cast as 32bit integer.
BSR(x,1) simply meant x divided by 2. This was a very common coding idiom back in those days, when compilers didn't do any optimization and a bitwise shift was much faster than division.
The snippet in C would be:
pt.h = (r.left + (int32_t)r.right) / 2;
pt.v = (r.top + (int32_t)r.bottom) / 2;
pt.h -= (width / 2);
pt.v -= (height / 2);
pt.h = max(0, min(pt.h, fDoc.fCols - width));
pt.v = max(0, min(pt.v, fDoc.fRows - height));
if (width > fDoc.fCols) {
pt.h -= (width - fDoc.fCols - 1) / 2;
}
if (height > fDoc.fRows) {
pt.v -= (height - fDoc.fRows - 1) / 2;
}

If I understand it correctly, it was calculating the top-left point of the bounding box.
pt == point, r == rect, h, v == horizontal, vertical, BSR(...,1) is a fast integer divide by 2, ORD4 promotes an expression to an unsigned 4 byte integer
The algorithms are extremely common for 2D graphics programming. The first is to find the center of a 2D rectangle, the second offsets a point by half the size, the third clips a point to be in the range of a rectangle, and so on.
Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
(Mac Pascal didn't have macros or inline expressions, so inline expressions like this were the way to go for performance.)
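The center / offset / clamp / re-center sequence described above can be sketched in C (an illustrative rewrite of one axis; the function and parameter names are mine, not Adobe's):

```c
/* Illustrative sketch, not Adobe's code: compute the left edge at
   which a region `width` pixels wide should be placed in a document
   `doc_cols` wide, given the selection rectangle's horizontal edges. */
int center_region(int sel_left, int sel_right, int width, int doc_cols) {
    int h = (sel_left + sel_right) / 2;   /* center of the selection    */
    h -= width / 2;                       /* back up by half the width  */
    if (h > doc_cols - width)             /* clamp to the right edge    */
        h = doc_cols - width;
    if (h < 0)                            /* ...and to the left edge    */
        h = 0;
    if (width > doc_cols)                 /* region wider than the doc: */
        h -= (width - doc_cols - 1) / 2;  /* re-center it instead       */
    return h;
}
```

When the region fits, the clamps keep it inside the document; when it doesn't, the final adjustment centers the overhang, matching the Max/Min sequence in the original snippet.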
It's like using i,j,k for loop indexes, or x,y,z for graphics axis.
There's no context in those names to help you understand them, you have to look at the code surrounding it. And even the most well-intentioned, small loops with obvious context right next to it can over time grow and add additional index counters until your obvious little index counter is utterly opaque without reading a dozen extra lines to understand it.
(And i and j? Which look so similar at a glance? Never. Never!)
> (And i and j? Which look so similar at a glance? Never. Never!)
This I agree with.
> There's no context in those names to help you understand them, you have to look at the code surrounding it.
Hard disagree. Using "meaningful" index names is a distracting anti-pattern, for the vast majority of loops. The index is a meaningless structural reference -- the standard names allow the programmer to (correctly) gloss over it. To bring the point home, such loops could often (in theory, if not in practice, depending on the language) be rewritten as maps, where the index reference vanishes altogether.
The issue isn't the names themselves, it's the locality of information. In a 3-deep nested loop, i, j, k forces the reader to maintain a mental stack trace of the entire block. If I have to scroll up to the for clause to remember which dimension k refers to, the abstraction has failed.
Meaningful names like row, col, cell transform structural boilerplate into self-documenting logic. ijk may be standard in math-heavy code, but in most production code bases, optimizing for a 'low-context' reader is not an anti-pattern.
That was my "vast majority" qualifier.
For most short or medium sized loops, though, renaming "i" to something "meaningful" can harm readability. And I don't buy the defensive programming argument that you should do it anyway because the loop "might grow bigger someday". If it does, you can consider updating the names then. It's not hard -- they're hyper local variables.
But once you nest three deep (as in the example that kicked off this thread), you're defining a coordinate space. Even in a 10-line block, i, j, k forces the reader to manually map those letters back to their axes. If I see grid[j][i][k], is that a bug or a deliberate transposition? I shouldn't have to look at the for clause to find out.
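A tiny C illustration of that point (a hypothetical grid of my own; the names and weights are purely for demonstration): with named axes, the role of each index stays visible at the use site, so a transposed subscript would read as wrong at a glance.

```c
enum { ROWS = 2, COLS = 3, CELLS = 4 };

/* Hypothetical example: a 3-deep nested loop with named indices.
   With row/col/cell, a transposed use like grid[col][row][cell]
   is visibly wrong; with i/j/k it is not. */
int sum_indices(void) {
    int total = 0;
    for (int row = 0; row < ROWS; row++)
        for (int col = 0; col < COLS; col++)
            for (int cell = 0; cell < CELLS; cell++)
                total += row * 100 + col * 10 + cell; /* axes stay legible */
    return total;
}
```

Swapping any two of the loop variables here would change the result, and with named indices the mistake is apparent without scrolling back to the for clauses.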
You seem to be missing my point. It's not about improving "clarity" about the math each line is doing -- that's precisely the kind of misconception so many people have about comments.
It's about, how long does it take me to understand the purpose of a block of code? If there was a simple comment at the top that said [1]:
# Calculate top-left point of the bounding box
then it would actually be helpful. You'd understand the purpose, and understand it immediately. You wouldn't have to decode the code -- you'd just read the brief remark and move on. That's what literate programming is about, in spirit -- writing code to be easily read at all levels of the hierarchy, and very specifically not having to read every single line to figure out what it's doing.

The original assertion that "This code is so literate, so easy to read" is demonstrably false. Naming something "pt" is the antithesis of literate programming. And if you insist on no comments, you'd at least need to name it something like "bbox_top_left". A generic variable name like "pt", which isn't even introduced in the context of a loop or anything, is a cardinal sin here.
Part of figuring out a reasonable level of commenting (and even variable naming) is a solid understanding of your audience. When in doubt aiming low is good practice, but keep in mind that this was 2D graphics software written at a 2D graphics software company.
To help understand, you need to see this code as math. Graphics programming algorithms are literally math.
You're asking for training wheels comments, which just get in the way for those who are familiar with the domain.
I'm sure a few graphics programming engineers might want calls to React's useState(), useEffect(), etc. to be documented in a codebase, yet a React programmer would scoff at the idea.
i++ // increments the loop variable

A newbie to programming might find such a comment useful, but to people who are maintaining such a piece of code it would be distracting line noise.

It all depends on who the professional peer is that you are writing the code for. It's totally fine to write for a peer who is familiar with the domain, as it's fine to write for a beginner, for pedagogy, such as in a textbook.
Also, breaking things down to more atomic functions wasn't the best idea for performance-sensitive things in those days, as compilers were not as good about knowing when to inline and not: compiler capabilities are a lot better today than they were 35 years ago...
I'm sure the code would be immediately obvious to anyone who would be working on it at the time.
Comments aren't unnecessary, they can be very helpful, but they also come with a high maintenance cost that should be considered when using them. They are a long-term maintenance liability because by design the compiler ignores them so its very easy to change/refactor code and miss changing a comment and then having the comment be misleading or just plain wrong.
These days one could make some sort of case (though I wouldn't entirely buy it, yet) that an LLM-based linter could be used to make sure comments do not get disconnected from the code they are documenting, but in 1990? not so much.
Would I have used longer variable names for slightly more clarity? Today, sure. In 1990, probably not. Temporal context is important and compilers/editors/etc have come a long way since then.
Because it's quite clear, everything is well named, and the filename also gives the context.
Clamps the result so it doesn’t go outside the document.
If the region is bigger than the document, it re-centers instead of snapping to (0,0).
Note this is a toxic license. Accepting it and/or reading the code has the potential for legal liability.
Still, applaud releasing the source code, even if encumbered. Preservation is most important, and any legal teeth will eventually expire with the copyright.
How would this potentially expose you to legal liability?
Taking his contribution to Photoshop into account, one could say that if you saw mainstream motion or still pictures in the Western world in the last three decades, you've probably seen something influenced by him in one way or another.
FYI: the version I used was registered to Apple. Apparently, the Knoll brothers demoed PS to Apple, and they promptly shared it amongst themselves and their buddies. Almost all illegitimate copies of it are derived from that pirated copy.
Fun fact… John Knoll's wife was the founding member of the Photoshop ‘Widows’ club… a home for people who have lost loved ones to software.
and having the source available didn't help so far either :-))
And that's the irony covered in my post: even though the source is available, it hasn't motivated anyone so far to create a better build.
could you please show me a good text tool plugin for GIMP, then?
You can check their forums & other sites: the text tool is at the top of their discussion lists.
So can you expand on why you think the text tool is bad?
Reddit: https://www.reddit.com/r/GIMP/comments/1fecr6u/suggestion_im...
It's just the first two results from the top of Google.
Maybe the tool was improved in version 3.0, I'm running an older 2.x version. I will check it next time.
The versions had difficulties with:
- applying font sizes
- random loss / reset of settings
- some issues with the preview when editing
- font preview before selection
etc.
The strange font sizes and setting reset was mostly fixed as part of the 2020 massive refactor [0]. There are still some minor inconsistencies between the two font editor panels, but they're being worked on.
Thankfully, you shouldn't have had any random setting changes since about the 2018 build.
[0] https://gitlab.gnome.org/GNOME/gimp/-/issues/344
It's not intuitive. It's actually possibly my most hated widget in the entire FOSS ecosystem.
I feel like that has changed? Even Blender felt good the last time I used it, Firefox became kinda fine, though these are probably bad examples as they are both mainstream software. But what about OSS that is used primarily by OSS enthusiasts? What about GIMP now?
> To change GIMP to single-window mode (merging panels into one window), go to "Windows" in the top menu and select or check "Single-Window Mode"; this merges all elements like the Toolbox, Layers, and History into one unified view.
Unfortunately, designers are rare among the FOSS community. You can't attract real casual or professional users if you don't recognize the value of professional UI/UX.
Whereas Photoshop and other "mainstream" software use terms and procedures non-programmers are more likely to be familiar with: heal this area with a patch, clone something with a clone stamp, scissors/lasso to cut something out (not saying GIMP doesn't have those)...