It makes a lot of sense that someone casually coming in to use ChatGPT for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
I can assure you, living in Seattle I still encounter plenty of AI boosters, just as much as I encounter AI haters/skeptics.
(Protip: if you're going to use em-dashes everywhere, either learn to use them appropriately, or be prepared to be blasted for AI-ification of your writing.)
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point summary at the end and bolded leadoffs to each bullet point is totally an AI thing too.)
I think it's definitely stronger at MS than most places, as my friend on the inside tells me.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have been practicing, for a long time, what execs can only see as alchemy, I guess. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves and, as long as deadlines are met, it'll all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
It hits weirdly close to home. Our leadership did not technically mandate use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which is, in my head, somewhere between skeptic and evangelist... dumb).
But the 'AI talent' part fits. For mundane stuff like the data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally, Google struggled for a long time to hire some people because they weren't an 'ideal culture fit' — i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away. And this garbage is being shoved in my face from every angle: my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, and AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
Specifically, I was using Gemini to answer questions about Godot for C# (not gdscript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
My buddies who are still (or until recently were) at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post-COVID layoffs, and layering "AI" over the layoffs leaves a bad taste.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyperscaling strategy right now: build out national-security-friendly datacenters all over the world. I think that's going to pay, but I could be wrong, and his strategy might be much more sophisticated and diverse than that. Either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead within a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
As a customer, I actually once had an MS account manager yell at me for refusing to touch <latest newfangled vaporware from MS> with a ten-foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, but with AI hype and good old foundations keeping its stock price going.
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?