Money.
Are people really not aware of what the company's overall mission, product and impact is? I find that hard to believe. If you accept employment at Facebook, regardless of what department you're in, you know exactly what kind of company you're contributing your time, energy and effort to.
Let's be honest: everyone working as a software engineer at Facebook is perfectly capable of finding employment elsewhere. Working there is a deliberate decision to prioritise comp over the stability of the world.
We're talking about software engineers here, not a cleaner taking any job they can get. It's literally one of the best-paid jobs relative to the effort you put into it. People slave away in fields picking berries for less, with more impact on their life expectancy. If there is any career where you can jump between jobs in just a few weeks, software engineering is one of them.
Yes? Why not? If I joined a company and figured out that what I did actually harmed more than it helped, I'd leave that place, absolutely. I'm a software engineer; even in the lowest possible position at a random company, even among the bottom 30% of earners in software in the country (not counting outsourcing, obviously), I'd earn more than most people in the country and live a better life. Especially at that time it was very easy to find new jobs.
Edit: Thinking about it, your comment actually made me more frustrated than I realized. I've been poor enough to be homeless at some points in my life, and yes, I've worked for immoral companies too, because I needed food and shelter. But once you move up in life to the comfy jobs like software engineering, you no longer have the excuse that it's just about "feeding your family" when you literally have a sea of jobs available. It might be an excuse you tell yourself to justify getting paid more, but if you truly did care, you'd make a different choice and still be able to feed your family. And I'm almost confident your family would be OK with you making that choice too, unless they also lack the empathy you seem to be missing.
This affects the brightest of the bright and the less talented alike.
Google and Meta are surely more open than a classified missile project. So it would really be beyond the pale for someone to not realize that what they are working on is an addictive platform. Sure, I am willing to bet they didn't say "addictive" and instead cleaned it up in tidy corporate product-management lingo, "highly engaged users" or something like that. But it's just impossible not to know.
Nobody would talk about whether the product is now “addictive” because that suggests crossing a finish line to completion, and we can’t ever be done.
Surely it's this, right? I just had what I would consider an intelligent conversation with someone in which we eventually settled on a core ideological difference between us: I thought all humans have equal value (infinite and immeasurable), while he believed a human's value is only as much money as that human can generate within capitalism (basically, if their salary or net worth is low, they must not be very valuable people, and we shouldn't do things for them like give them healthcare).
I think it's a bit of a dangerous fallacy to assume that intelligence naturally leads people to arrive at your own personal ideology. There were plenty of highly intelligent Nazis and Imperial Japanese. They either didn't care about the regimes they supported or leveraged their intelligence to rationalize them (requiring fallacy to do so, of course - or perhaps not, if they really did just want their subgroup to dominate all others and believed it was possible to do so).
For me, smarts alone don't define a value system. It can't be purely rationality, since deciding good and bad is subjective and depends on what you value. You can argue these things rationally and use logic to determine outcomes, but at the end of the day there's a messy human brain deciding good/bad, important/not important, relevant/not relevant.
If you want to be an accountant, lawyer, surveyor, et cetera, you have to learn about ethics, and violating your professional institute's code of ethics may leave you unable to practice in future.
I've yet to see an ethics module that covers putting ethics over profit.
I refused the pressure to be unethical when I was pushed, even when I knew I would be fired (which I was). I was able to discuss it with old mentors, who made time to meet with me, even when I hadn't worked at their company for years.
Lastly I disclosed why I was fired at interview for a new job (without the confidential details), and was hired partly on the strength of it by a person who had been through much the same.
And I didn't learn it at university; I learnt it during my professional qualification, which was around three years long and at postgraduate level, although it had non-degree entry routes for technicians. It also required a wide range of supervised experience.
Maybe we should have Gavin Belson's Tethics be more widely taught???
Interestingly many accountants in the UK never did a degree (very many more did a degree in something unrelated), but came through the technician route of evening, weekend or day release study. Many do their chartered training at weekends.
The gap isn't education, it's accountability. Engineers building engagement loops know exactly what they're doing. They just don't have a professional body that can revoke their license for it.
> We don't even need formal regulation to start — just honest internal conversation
> They just don't have a professional body that can revoke their license for it.
What internal conversations could lead to a professional body that can revoke anyone's license? I'm sorry, but your comment doesn't make much sense.
Edit: Dammit, I realize now I think I fell for vibebait, leaving for posterity so others don't fall into the same trap.
And no, not vibebait — just a poorly structured comment from a guy with a fever typing on his phone.
I would just as soon call myself a software doctor or software lawyer. Or software architect.
From my understanding, software engineers are a long way out from this still, but perhaps we'll get there once the dust settles on more of these sorts of lawsuits.
In the same way, Amazon being big in India isn't just great because of the vast talent pool and 'low' costs in India (even if many if not most Indian programmers are subpar, there are over a billion people): it basically ensures that the government in India can never turn against Amazon, because these jobs are concentrated in a specific region and India isn't a unified state. Amazon can try getting into many different things in India without the risk that a small foreign company breaking into India would have.
Typically, intelligent people get so much joy out of being able to do something (such as addicting the masses), they do not stop to think whether that's a good idea. Especially when that's the very thing that's fuelling their extremely lavish lifestyle.
I've heard "well, you have to change things from the inside" before.
And a lot of people have been there for a while, it wasn't always... quite as bad even if a lot of the warning signs were absolutely there.
I was actually just thinking to myself this morning that I literally have no idea what these feeds look like at this point, but more and more people seem to look at me with envy when I say I don't have any, lol. I'm kind of curious and might ask my friends to show me what they're looking at day to day.
Given that you probably don't earn that today: say you got paid that now instead of whatever you earn, what would you actually spend that money on?
This makes two of us. Nice to see a similar-minded person. Cheers to you!
https://writeforfun.mataroa.blog/blog/the-brightest-of-the-b...
Essentially a thought dump. Hope you can read it and we can discuss it.
Have a nice day!
I wrote the 6 minute mark as a guess at how long it took me to write the comment, which was actually around 50 minutes. And I mentioned you many times in the comment because I had written something first and then wrote something on top of it, hence the many initial mentions.
Anyways I have now removed the mentions and honestly a lot of this is just transparency efforts.
I literally just write whatever I am thinking :)
Intelligence is not particularly correlated to ethics or morality. Probably sounds obvious when I say it directly, but it is clearly something that you have banging around in the back of your mind. Bring it forward out of the morass of unexamined beliefs so you can see that it is clearly wrong, and update the rest of your beliefs that are implicitly based on the idea that intelligence somehow leads to some particular morality as appropriate.
A confectionery company invented a type of bubble gum (called "Umpty Candy") that became addictive not because it had any drugs in it, but because they kept optimising the taste until it became too delicious to refuse.
I’m starting to think that we need to push for more of the internet going behind paywalls, which is weird because I’ve always been somebody who claimed to hate walled gardens and supported “information should be free”.
The ease of creating digital data has led to an infinite sea of bullshit. Ads and the attention economy are just the newest layer of this asymmetry. Curated datasets are a solution to the problem, since this is how old media worked. The problem then is how it will be paid for.
Same with search, or AI: clearly there's value, but it's hard to become a $1T company while still being ethical. We need the world to be okay with much, much smaller, less valuable tech companies.
I would argue that we fail completely at doing this (historically, too, see e.g. leaded gas).
This incentivizes companies toward net-negative behavior, despite knowing better, until it is fully regulated, because it is clear the behavior won't really be punished and will remain a net positive for them.
It is a difficult problem though.
The guy is worth a quarter trillion dollars and doesn't seem intent on calling it a day, and insists on destroying society's youth so he can make more money. Intelligent or not, that's a mental disease.
Imagine having that sort of money where if 99% was taken away, you'd still have over 2 billion dollars to your name....and you refuse to just walk away and focus on things like your family, making the world a better place, or just enjoying your life. Tom took the money for MySpace and actually seems to enjoy time with his family, traveling, doing photography, etc.
For all his (many) faults, Gates took a look at the polio virus and said "I'm bigger than you" and pretty much spent until it was wiped out. Doesn't counteract the bad or the Epstein stuff at all, but wiping out polio has helped people.
Mark's done jack shit to genuinely help people besides his shareholders and his immediate family. One might argue that his whole bunker thing is an indicator that he's realized he's done tremendous damage to society, but instead of fixing it, he's insulating himself for when the proverbial bomb goes off.
I do think Instagram in particular has been a boon for small businesses, providing them visibility in the marketplace that was previously unavailable to them.
Social media has also been a way for communities to connect organically with discoverability features missing on the old web.
There are positives and negatives - if it was only negatives people would be quicker to abandon the platforms.
Although I'm not familiar with the case at hand, I agree there's potential there for real harm, especially to children.
When you define "addiction" as anything people do at a level you consider excessive, the word expands to cover every domain of life and so becomes worse than useless.
It is not a fair fight and to act like this is anything less than a corporate-run legal addiction machine is way too generous to these companies given what we know now. Sometimes I feel like people only consider something addictive if it involves slot machine mechanics or an actual narcotic. But we know now it’s much broader than that.
Your argument held water in 2010. Not in 2026. We know better now.
This is not about Alice liking or disliking it. This is about allowing Mark to engineer a system where statistically too many Bobs and Charlies can't refuse (for the same reasons gambling is more common in poor communities), making society worse off as a result.
What's amazing here is the Google and Facebook lawyers think they have a chance to persuade members of the public otherwise.
I think this may also be why there is so much sugar in American food. People buy more of the sweet stuff. So they keep making it sweeter.
I'm not sure who should be responsible. It kinda feels like a "tragedy of the commons" kind of situation.
The C-suite has learned not to put so much incriminating stuff into writing (after Apple/Google etc. got caught making blatantly illegal anti-poaching agreements in personal emails from folks like Jobs), so proving that is probably gonna be tough.
I can kill a person with a car either intentionally or unintentionally. Of course one is worse than the other, but both are ultimately bad and you should face justice for either of them, even if the punishment might be different because of the motivation/background. But neither should leave you as "innocent".
I am not saying that Facebook didn't try. I am just saying that only having access to screens, they would inevitably fail. Screens are very unlike addictive drugs and cannot directly alter neurochemistry (at least not any more than a sunset or any perception does). I strongly dislike the company and have personally never created a Facebook account nor used the website.
Mostly not. That's the lawyers' job. The jury listens.
> outside of their medical context
Well, sure. It's a legal context now. The defense get to make the medical argument, if they like. I think it's a losing one.
How do you know what the jury are saying?
>outside of their medical context
That's because medicine doesn’t own the language. People do. If the jury hear words used wrongly then, as speakers of the language, they can interpret it as they wish. They are about to hear from another attorney who will say the opposite to the first one, and they will decide which was most persuasive.
Screens are able to show you things that give you small, short dopamine hits, enough to keep you engaged and trying to get more. This is exactly how addictive drugs, gambling, etc. all work.
It was fine for years because it was a generic service in which everybody was forced to view the same content in the same way.
They are very very different things.
The traditional answer is "engagement," but there is a strong argument to be made that intentional engagement (engagement by conscious, willful choice) is not possible, repetitively, for a vast smorgasbord of content spinning by at short intervals.
“We see that you’re slightly conservative. Next up: a Nazi sympathizer video! Enjoy your ragebait!”
What I don't find plausible is the specific kind of harm alleged in the case discussed in the source article, where having videos attuned to your interests constantly fed to you causes you to become depressed and suicidal.
I'd argue that we basically incentivise companies to cause harm whenever it is unregulated and profitable because the profits are never sufficiently seized and any prosecution is a token effort at best.
See leaded gas, nicotine, gambling, etc. for prominent examples.
I personally think prosecution should be much harsher in an ideal world; if a company knows that its products are harmful, it should ideally be concerned with minimising that harm instead of fearing to miss out on profits with no legal worries.
> He contended the A was for addicting, the B for brains and the C for children.
I gotta admit, I find it really trivial and silly that this is how court cases go, but I understand that juries are filled with all sorts of people, and lawyers I guess feel the need to really dumb things down? Or maybe it's the inner theater kid coming out?
Fact is defined by whatever the jury believes.
The lawyers are performers in a play, to some extent. Theatricality can pay off, in the right amounts.
The same will happen with expert witnesses; both bring in people willing to say virtually anything, for the right pay.
Whereas for jury members, the only people who could do that are other jury members, who would be just as clueless.
(I get that you don't want a jury with wildly different levels of domain knowledge. e.g. if you had one "expert" and the remainder being laymen, the expert could quickly dominate the entire jury - and there would be no one there to call out any bias from them)
This is absolutely not the case.
> and the other party can also interrogate them and try to show holes in their argumentation
Sure, and now the jury - with zero domain knowledge - sees two very confident sounding experts who disagree on a critical point... and you wind up with it coming down to which one was more likeable.
How can you tell if you're not also an expert?
> the other party can also interrogate them and try to show holes in their argumentation
Yes, and when the science is beyond the experience of the jury, experts giving opposite opinions will be as hard to distinguish as conflicting non-expert witness testimony (or even the testimony of the defendant compared to the accuser or litigant).
In my (extremely limited) experience, the latter two are probably true but not necessarily the first one. I've been called for jury duty exactly once so far, and it happened to coincide with a period where I wasn't particularly happy with my job situation and was pushing for some changes with my manager, which made me motivated to try to get picked so that I could stall a bit to see if my situation changed. As far as I could tell, almost everyone in the room full of like 40 people who were in the pool for the civil trial they put me in the room for first was trying to get out of it, and I ended up being the first person picked (out of I think 8 overall; there were only six jurors needed for this trial and if I recall correctly there were two alternates). It genuinely seemed to me like the lawyers were basically happy to have someone who actually wanted to do it rather than have to force someone to go who wasn't going to want to actually pay attention or take it seriously.
My guess would be that they don't want someone who's enthusiastic because they have a particular agenda that's against the verdict they're looking for. If you're a prosecutor, you're probably not going to want to pick someone who's obviously skeptical of law enforcement, and if you're a defense attorney, you're probably not going to want someone who's going to convict someone because they "look guilty". I'm not convinced that someone who really wants to be on a jury because they thought it looked fun on TV or something but otherwise doesn't have any clear bias towards one side or another would get weeded out, especially for most civil cases where people probably won't have as much concern about either letting a guilty person go free or putting an innocent person behind bars.
It’s also not a great sign that they’re relying on such tricks and props to hook the jury. In stronger cases they’ll rely on actual facts and key evidence, not grand but abstract claims using props like this.
I don’t know how the rest of the opening remarks went, but from the article it looks like Meta’s lawyers are already leaning into the actual evidence (or lack thereof) that their products were central to the problems:
> Meta attorney Paul Schmidt countered in opening remarks to the jury that evidence will show problems with the plaintiff's family and real-world bullying took a toll on her self-esteem, body image and happiness rather than Instagram.
> "If you took Instagram away and everything else was the same in Kaley's life, would her life be completely different, or would she still be struggling with the same things she is today?" Schmidt asked, pointing out an Instagram addiction is never mentioned in medical records included in the evidence.
Obviously this is HN and we’re supposed to assume social media is to blame for everything, but I would ask people to try not to be so susceptible to evidence-free narrative persuasion like the ABC trick. Similar tricks were used years ago when it was video games, not social media, in the crosshairs of lawyers looking to extract money and limit industries. Many of the arguments were the same, such as addicting children’s brains and being engineered to capture their attention.
There’s a lot of anger in the thread about Discord starting to require ID checks for some features, but large parts of HN aren’t making the connection to these lawsuits and cases as the driving factor behind these regulations. This is how it happens, but many are cheering it on without making the connection. Or they are welcoming regulations but they have a wishful thinking idea that they’re only going to apply to sites they don’t use and don’t like, without spilling over into their communication apps and other internet use.
I feel like this must be an indication of an inherent flaw with a society designed around the idea that litigation originates in an individual's singular harm received from a company, outside of I guess class action lawsuits, which to be fair, I don't know much about. But I'm reminded of the McDonald's coffee case, when McDonald's was able to leverage their incredible capital power to make that woman look like such a crazy litigious hysterical lady that people to this day use it as an example of how Americans are just inherently trivially litigious, when in reality, that coffee was just way too hot.
Nobody can stand up to the might of a trillion dollar company. The resources they have at hand are just too vast.
Which is why Americans need to curb the lobbying power and communications power of trillion dollar corporations, and limit the rights corporations have versus the rights of human citizens.
Or if a company is too big to be held accountable, it needs to be broken up aggressively.
Social media technology, according to former employees, is intentionally engineered to capitalize on dependency without the user's knowledge, comes with no rating system or warnings, and hosts real interactions, not simulated ones.
I think they have a much better case here.
The interns at work talk incessantly about gambling. It’s just weird and wrong.
social media platforms however, ...
Cigarettes were 100% engineered for addiction.
>I do not imagine these companies to build these products in a way that maximises their addiction.
Holy hell, you need to go watch the big tobacco trials of the past to see the CEOs knew exactly what they were doing. And then how much they spent on outright buying scientists to say bullshit so they could drag the matter on for years.
Tobacco set the stage for companies to use doubt of every kind to hide their intentional actions.
If you think it's not, and is just "similar to addiction", try blocking these sites in your browser/phone and see how long you last before feeling negative effects.
Seems not so far back the Sacklers were proven(?) to have profited from and fueled the opioid crisis while colluding with the healthcare industry - and last I heard they were haggling over the fine to pay to the state, while using various financial loopholes to hide their wealth behind bankruptcy and offshore instruments.
What then of the trillion-dollar companies that can drag out appeals for decades and obfuscate any and all recommendations that may be reached?
I recall Philip Morris pivoting their main business when it began hemorrhaging money. Essentially this pivot came in the form of becoming the largest “box-to-mouth” food producers in the world, of course applying the same addictive principles that fueled their tobacco empire to maintain profitability.
I doubt, however, social media corps like Meta will follow suit today—mostly because accountability feels more like a suggestion these days.
I find myself in the uncomfortable position of sympathizing with both sides of the argument - a yes-but-no position.
When I worked there every week there would be a different flyer on the inside of the bathroom stall door to try to get the word out about things that really mattered to the company.
One week the flyer was about how a feed video needed to hook the user in the first 0.2 seconds. The flyer promised that if this was done, the result would in essence have a scientifically measurable addictive effect, a brain-hack. The flyer was to try to make sure this message reached as many advertisers as possible.
It seemed to me quite clear at that moment that the users were prey. The company didn't even care what was being sold to its users with this brain-reprogramming-style tactic. Our goal was to sell the advertisers on the fact that we were scientifically sure we had the tools to reprogram our users' brains.
What it is is the consequence of the power existing. 200 years ago nobody was arguing about how to hook people in the first 0.2 seconds of video, but it's not because nobody would have refused the power it represents if offered. They just couldn't have it. It's humans. People want this power over you. All of them.
It's sociopaths and narcissists which want it.
And as Atlas667 pointed out, it's also a direct consequence from a capitalistic world view, where it has replaced your morals.
This is not in relationship to state propaganda. Multiple things can cause abhorrent behavior, and just because we've identified something as problematic doesn't inherently imply that other unrelated examples are any better.
There are certainly well adjusted people that would like to fix things they feel are inefficiencies or issues in their government, especially when those issues are directly related to their areas of expertise. Thinking well adjusted people wouldn't want to be in a position of power is exactly how you ensure that only bad people end up with power.
Your comment speaks volumes about you, not humanity.
“Most” people won’t act badly to attain this power, “some” will. Being placed into a position and choosing harm is not the same as pursuing it.
At least an unhealthy amount of them. I have no desire to have power over people, except I would like it if my kids actually listened to me...
Like I say, maybe everyone else is accustomed to this idea, but if you have any pictures of them, I think a lot of people would be interested in seeing them, unless I'm misunderstanding what it is.
Stuff like how to reduce nesting logic, how to restructure APIs for better testing, etc.
People usually like them. I can't say I've seen what the parent post described so I imagine it's "the other" FAANG mentioned here.
Every high-traffic flat space on the wall would be covered with a poster, most of them with designs lifted from US WWII propaganda, many hard to tell if satire or not. I was surprised there was never one about carpooling with der Führer.
While I've not seen this in every single place I've worked, it's very common.
I thought it was kind of pathetic how quickly they shoved iPads into schools with no real long-term data, no research whatsoever. Just insane, really. And now here we are yet again.
Don't consume your own product.
"I smoke weed to get high, it's not the weed that gets me high"
"Social media is addicting, it's not the social media that makes it addictive."
1) You can't stalk someone deliberately and persistently, using any means, or medium; even if you're a company, and even if you have good intentions.
2) You can't intentionally influence any number of people towards believing something false and that you know is against their interest.
These things need to be felony-level or higher crimes, where executives of companies must be prosecuted.
Not only that, certain crimes like these should be allowed to be prosecuted by citizens directly. Especially where bribery and threats by powerful individuals and organizations might compromise the interests of justice.
The outcome of this trial won't amount to anything other than fines. The problem is, this approach doesn't work. They'll just find different ways that can skirt the law. Criminal consequence is the only real way to insist on justice.
Unsealed court documents show teen addiction was big tech's "top priority"
1. We sell ads to make money.
2. If we keep eyeballs on our apps more than competing apps, we can increase the price of our ads and make more money.
3. Should we implement limits to kick kids off the app after they've been doomscrolling for an hour? Absolutely not, that would violate our duty to our shareholders. If parents complain, we'll say they should use the parental controls present on their phones and routers. We can't make choices that limit our income if parents don't use the tools they already have.
I'm sorry that social media has ruined so many kids' lives, but I don't think the responsibility lies with the tech companies in this case. It lies with the society that has stood by idly while kids endured cyber-bullying and committed suicide. This isn't something that happened recently- the USA has had plenty of time to respond as a society and chosen not to. Want to sue someone? Sue Congress.
Google and Meta are rational actors in a broken system. If you want to change something you should change the rules that they operate under and hold them accountable for those rules going forward. Australia (and Spain) is doing something about it- now that social media is banned for kids under 16 in those countries, if social media companies try to do anything sneaky to get around that you actually have a much stronger case.
Now if there were evidence that they were intentionally trying to get kids bullied and have them commit suicide then by all means, fine them into oblivion. But I doubt there is such evidence.
These are opening remarks, Perhaps we should wait until they actually present evidence.
Heavily redacted document talking about the mass notifications: https://storage.courtlistener.com/recap/gov.uscourts.cand.40...
Here's reporting on this and other documents: https://techoversight.org/2026/01/25/top-report-mdl-jan-25/
HN discussion of it: https://news.ycombinator.com/item?id=46902512