I’ve found it very helpful, in the same vein as RFC 2119 terminology (MUST, SHOULD, MAY, etc.): it works when you need your meaning to be understood by a counterparty and can agree on a common language to use.
This applies to any repeated chance, so it probably doesn't need to be called out again when translating odds to verbal language.
One-off, 1-out-of-N errors are really hard to interpret, even with clear objective standards.
And it emphasizes why mapping everyday language to objective meanings is a necessity for anything serious, yet can still lead to problematic interactions.
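For concreteness, here's a minimal sketch of what an agreed term-to-odds mapping might look like, loosely inspired by Sherman Kent's estimative ranges (the exact cutoffs and the helper function below are illustrative assumptions, not an official standard):

    # Hypothetical mapping of estimative terms to probability ranges,
    # loosely after Kent's "Words of Estimative Probability".
    ESTIMATIVE_TERMS = {
        "almost certain":       (0.87, 0.99),
        "probable":             (0.63, 0.87),
        "chances about even":   (0.40, 0.60),
        "probably not":         (0.20, 0.40),
        "almost certainly not": (0.02, 0.13),
    }

    def describe(p: float) -> str:
        """Translate a probability into the agreed verbal term, if any."""
        for term, (lo, hi) in ESTIMATIVE_TERMS.items():
            if lo <= p <= hi:
                return term
        return "no agreed term (state the number)"

    print(describe(0.75))  # -> "probable"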
Probability assessments are will almost always sometimes certainly could be considered likely hard!
Also, what likelihood can we assign to claims that the virus was deliberately modified at the furin cleavage site as part of a gain-of-function research program aimed at assessing the risks of species-jumping behavior in bat coronaviruses? This is a separate question from the lab escape issue, which could have involved either a collected wild-type virus or one that had been experimentally modified.
Perhaps experts in the field 'misinterpreted the evidence' back in the early months of the pandemic, much as happened with the CIA and its 'intelligence on Iraq'?
https://interestingengineering.com/health/us-doe-says-covid-...
Even if WIV did try to create COVID, they couldn’t have. As Yuri said, COVID looks like BANAL-52 plus a furin cleavage site. But WIV didn’t have BANAL-52. It wasn’t discovered until after the COVID pandemic started, when scientists scoured the area for potential COVID relatives. WIV had a more distant COVID relative, RaTG13. But you can’t create COVID from RaTG13; they’re too different. You would need BANAL-52, or some as-yet-undiscovered extremely close relative. WIV had neither.
Are we sure they had neither? Yes. Remember, WIV’s whole job was looking for new coronaviruses. They published lists of which ones they had found pretty regularly. They published their last list in mid-2019, just a few months before the pandemic. Although lab leak proponents claimed these lists showed weird discrepancies, this was just their inability to keep names consistent, and all the lists showed basically the same viruses (plus a few extra on the later ones, as they kept discovering more). The lists didn’t include BANAL-52 or any other suitable COVID relatives - only RaTG13, which isn’t close enough to work.
Could they have been keeping their discovery of BANAL-52 secret? No. Pre-pandemic, there was nothing interesting about it; our understanding of virology wasn’t good enough to point this out as a potential pandemic candidate. WIV did its gain-of-function research openly and proudly (before the pandemic, gain-of-function wasn’t as unpopular as it is now) so it’s not like they wanted to keep it secret because they might gain-of-function it later. Their lists very clearly showed they had no virus they could create COVID from, and they had no reason to hide it if they did.
COVID’s furin cleavage site is admittedly unusual. But it’s unusual in a way that looks natural rather than man-made. Labs don’t usually add furin cleavage sites through nucleotide insertions (they usually mutate what’s already there). On the other hand, viruses get weird insertions of 12+ nucleotides in nature. For example, HKU1 is another emergent Chinese coronavirus that caused a small outbreak of pneumonia in 2004. It had a 15-nucleotide insertion right next to its furin cleavage site. Later strains of COVID got further 12-to-15-nucleotide insertions. Plenty of flus have 12-to-15-nucleotide insertions compared to earlier flu strains.
.... COVID’s furin cleavage site is a mess. When humans are inserting furin cleavage sites into viruses for gain-of-function, the standard practice is RRKR, a very nice and simple furin cleavage site which works well. COVID uses PRRAR, a bizarre furin cleavage site which no human has ever used before, and which virologists expected to work poorly. They later found that an adjacent part of COVID’s genome twisted the protein in an unusual way that allowed PRRAR to be a viable furin cleavage site, but this discovery took a lot of computer power, and was only made after COVID became important. The Wuhan virologists supposedly doing gain-of-function research on COVID shouldn’t have known this would work. Why didn’t they just use the standard RRKR site, which would have worked better? Everyone thinks it works better! Even the virus eventually decided it worked better - sometime during the course of the pandemic, it mutated away from its weird PRRAR furin cleavage site towards a more normal form.
COVID is hard to culture. If you culture it in most standard media or animals, it will quickly develop characteristic mutations. But the original Wuhan strains didn’t have these mutations. The only ways to culture it without mutations are in human airway cells, or (apparently) in live raccoon-dogs. Getting human airway cells requires a donor (i.e., someone who donates their body to science), and Wuhan had never done this before (it was one of the technologies used only at the superior North Carolina site).
The reality here is that there are thousands of mammalian viruses that don't infect humans but could be modified to infect humans via specific modifications of their target mammalian cell-surface receptor proteins, as was done in this specific case of a bat coronavirus modified at its furin cleavage site to make it human-targetable. Any modern advanced undergrad student in molecular biology could explain this to you, if you bothered to listen.
So first, we need an acknowledgement of Covid's origins that vastly embarrasses China and the USA, and second, we need a global treaty banning the generation of novel human pathogens from wild mammalian virus types... I guess I won't hold my breath.
I’ll never forget old World of Warcraft discussions about crit probability. If a particular sequence is “one in a million”, there are 10 million players, and each player encounters hundreds or thousands of sequences per day, then “one in a million” is pretty effing common!
I'd argue that it doesn't depend on that at all. That is, its meaning is the same whether you're performing the trial once, a million times, ten million times, or whatever. It's rather whether the implication is "the possibility may be disregarded" or "this should be expected to happen occasionally" that depends on how many times you're performing the trial.
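A quick back-of-the-envelope with assumed numbers (10 million players, say 500 sequences per player per day) shows both readings at once:

    p = 1e-6                 # "one in a million" per sequence
    players = 10_000_000
    per_day = 500            # assumed midpoint of "hundreds or thousands"

    # Across the whole player base: expect thousands of hits per day.
    print(p * players * per_day)      # 5000.0

    # For one player on one day: still safely ignorable.
    print(1 - (1 - p) ** per_day)     # ~0.0005, i.e. about 0.05%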
[I]n my affidavit, I wrote that SQL schemas would provide “only marginal value” to an attacker. Big mistake. Chicago jumped on those words and said “see, you yourself agree that a schema is of some value to an attacker.” Of course, I don’t really believe that; “only marginal value” is just self-important message-board hedging. I also claimed on the stand that “only an incompetently built application” could be attacked with nothing but its schema. Even I don’t know what I meant by that.
His post: https://sockpuppet.org/blog/2025/02/09/fixing-illinois-foia/
My post: https://mchap.io/losing-a-5yr-long-illinois-foia-lawsuit-for...

Not really.
>I wrote that SQL schemas would provide “only marginal value” to an attacker. Big mistake. Chicago jumped on those words and said “see, you yourself agree that a schema is of some value to an attacker.”
The City of Chicago's argument was that something of ANY value, no matter how insignificant, would help an attacker exploit their system, and was therefore possible to keep secret under the FOIA law.
So obviously there must be some threshold for the value to an attacker. He attempted to communicate that schemas are clearly below such a threshold and they used his wording to attempt to argue the opposite.
I’m glad that argument lost, since it totally subverts the purpose and intention of the FOIA. Any piece of information could be of value to some attacker, but that doesn’t outweigh the need for transparency.
> “see, you yourself agree that a schema is of some value to an attacker.”
IANAL, but it appears justice systems universally interpret this type of "technically yes, if that makes you happy, but honestly unlikely" statement as a "yes, with technical bonus", not a "no with extra steps" at all; it has to be shortened to just "unlikely, from my professional perspective" or something lawyer-approved to have the intended effect. Courts are weird.
Despite all that, Chicago still pushes back aggressively. Here's a fun one from a recent denial letter they sent for data within the same database:
"When DOF referred to reviewing over 300 variable CANVAS pages, these are not analog sequential book style pages of data. Instead, they are 300 different webpages with unique file layouts for which there is no designated first page."
This is after I requested every field reflected within the 300 different pages, because it would be unduly burdensome to go through. I'm waiting for the city's response for the TOP page rather than the FIRST page. It's asinine that we have to do this in order to understand how these systems can blindly ruin the lives of many.

They also argued the same 7(1)(g) exemption despite me being explicit about not wanting the column names. That effectively turns their argument into saying that the release of any information within a database, full stop, is exempt because it could be used to figure out what data exists within that database. That's against the spirit of IL FOIA, which includes this incredibly direct statutory language:
Sec. 1.2. Presumption. All records in the custody or possession of a public body are presumed to be open to inspection or copying. Any public body that asserts that a record is exempt from disclosure has the burden of proving by clear and convincing evidence that it is exempt.
https://www.documentcloud.org/documents/25930500-foia-burden...
https://www.documentcloud.org/documents/25930501-foia-burden...
Ultimately, we lost because the Illinois Supreme Court interpreted the statute such that "file layouts" were per se exempt, regardless of how dangerous they were(n't), and then decided SQL schemas were "file layouts".
(SQL schemas are basically the opposite of file layouts, but whatever).
While I laud the gracious application of Hanlon's Razor here, I also think that, for at least some actors, the imprecision was the feature they needed, rather than the bug they mistakenly implemented.
I spun up a quick survey[1] that I sent out to friends and family to try to get some numbers on these sorts of phrases. Results so far are inconclusive.
You're right (technically correct, which is the best etc.)! That is why "almost all" can mean everything except rational numbers.
I am a mathematician, but, even so, I think that this is one of those instances where we have to admit that we have mangled everyday terminology when appropriating it, and so non-measure theoretic users should just ignore our definition. (Similarly with "group," where, at the risk of sounding tongue-in-cheek because it's so obvious, if I were trying to analyze how people usually understand its everyday meaning I wouldn't include the requirement that every element have an inverse.)
If there's a finite subset of an infinite set, almost all members of the infinite set are not in the finite set. E.g. Almost all integers are not 5: the set of integers equal to five is finite and the set of integers not equal to five is countably infinite.
Likewise for two infinite sets of different cardinality: almost all real numbers are not integers.
Etc.
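For the measure-theoretic sense being discussed, a minimal formal sketch (the standard definition, stated with Lebesgue measure):

    % A property P holds for almost all x in R if the exceptional
    % set has Lebesgue measure zero:
    \[
      \lambda\bigl(\{\, x \in \mathbb{R} : \neg P(x) \,\}\bigr) = 0
    \]
    % Example: almost all reals are irrational, since Q is countable
    % and therefore \lambda(\mathbb{Q}) = 0.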
If you're a teacher and one student per class does the same thing, it's common, even though it's only 1/25 or 1/30 of all students.
1. It’s generally difficult to quantify such risks in any meaningful manner
2. Provision of any number adds liability, and puts you in a damned-if-it-works-out, damned-if-it-doesn’t situation
3. The operating surgeon is not the best to quantify these risks - the surgeon owns the operation, and the anaesthesiologist owns the patient / theatre
4. Gamblers quantify risk because they make money from accurate assessment of risk. Doctors are in no way incentivised to do so
5. The returned chance of 1/3 probably had an error margin of +/-33% itself
According to the literature, 33 out of 100 patients who underwent this operation in the US within the past 10 years died. 90% of those had complicating factors. You [ do / do not ] have such a factor.
Who knows if any given layman will appreciate the particular quantification you provide, but I'm fairly certain that data exists for the vast majority of serious procedures at this point.
I've actually had this exact issue with the veterinarian. I've worked in biomed. I pulled the literature for the condition. I had lots of different numbers but I knew that I didn't have the full picture. I'm trying to quantify the possible outcomes between different options being presented to me. When I asked the specialist, who handles multiple such cases every day, I got back (approximately) "oh I couldn't say" and "it varies". The latter is obviously true but the entire attitude is just uncooperative bullshit.
> puts you in a damned-if-it-works-out, damned-if-it-doesn’t situation
Not really. Don't get me wrong, I understand that a litigious person could use just about anything to go after you and so I appreciate that it might be sensible to simply refuse to answer. But from an academic standpoint the future outcome of a single sample does not change the rigor of your risk assessment.
> Doctors are in no way incentivised to do so
Don't they use quantifications of risk to determine treatment plans to at least some extent? What's the alternative? Blindly following a flowchart? (Honest question.)
> The returned chance of 1/3 probably had an error margin of +/-33% itself
What do you mean by this? Surely there's some error margin on the assessment itself but I don't see how any of us commenting could have any idea what it might have been.
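For illustration only: if the 1/3 figure came from a small series of comparable cases, the margin is easy to sketch with a normal-approximation interval on a binomial proportion. The case count below is entirely made up:

    import math

    def normal_ci(deaths: int, n: int, z: float = 1.96):
        """95% normal-approximation CI for a binomial proportion."""
        p = deaths / n
        se = math.sqrt(p * (1 - p) / n)
        return p - z * se, p + z * se

    # Hypothetical: 3 deaths in 9 similar cases -> roughly (0.03, 0.64),
    # an interval wide enough to swallow the point estimate itself.
    print(normal_ci(3, 9))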
Everyone has complicating factors. Age, gender, ethnicity, obesity, comorbidities, activity level, current infection status, health history, etc. Then you have to factor in the doctor's own previous performance statistics, plus the statistics of the anaesthesiologist, nursing staff, the hospital itself (how often do patients get MRSA, candidiasis, etc.?).
And, of course, the more factors you take into account, the fewer relevant cases you have in the literature to rely on. If the patient is a woman, how do you correctly weight data from male patients that had the surgery? What are the error bars on your weighting process?
It would take an actuary to chew through all the literature and get a maximally accurate estimate based on the specific data that is known for that patient at that point in time.
Increase the cost of the fallout of a decision (your relationships, your boss's job, your org's existence, the economy, national security, etc.) and the real fun starts.

People, no matter what they say about other people's risk avoidance, all start behaving the same way as the cost increases.

This is why we end up with Trump-like characters up the hierarchy everywhere you look: no one capable of appreciating the odds wants to be sitting in those chairs and being held responsible for all kinds of things outside their control.

It's also the reason why we get elaborate signalling (costumes/rituals/pageantry/ribbons and medals/imposing buildings/PR/marketing, etc.) to shift focus away from quantifying anything. See Veblen's Theory of the Leisure Class. Society hasn't found better ways to keep groups together while handling complexity the group is incapable of handling. Even small groups will unravel if there is too much focus on the low odds of a solution.
"So...you're telling me there is a chance!"
Partial HTML: https://history.state.gov/historicaldocuments/frus1951v04p2/...
Full text PDF scan: https://www.cia.gov/readingroom/docs/CIA-RDP79R01012A0007000...
The goal is to remove uncertainty in the language when documenting/discussing situations for the state.
It doesn't matter that it's wrong colloquially or "feels wrong". When you're reading or talking about a subject with the government, you need to use a specific definition (and thus adjust your mental model, because everyone else is doing the same) so that no one is misunderstood.
Would it be better to always use raw numbers? Honestly I don't know.
I haven't tried this myself and haven't run across a situation to apply it to lately, but I thought it was interesting.
I kind of see how this might be useful, but what I've actually seen is an illusion of certainty from looking at numbers and thinking that means you're being quantitative instead of, you know, pulling things out of your butt. Garbage in, garbage out still applies.
I remember an appealing description of the difference being that a precise archer might sink all their arrows at the same spot on the edge of the target, whereas an accurate archer might sink all their arrows near the bull's eye without always hitting the same spot.
Someone mentioned Fermi calculations; a related fun exercise in this sort of logic is the work on grabby aliens: https://grabbyaliens.com/
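I haven't worked through the grabby-aliens model itself, but the flavor of these Fermi-style exercises is easy to sketch: propagate optimistic and pessimistic bounds separately and watch the answer smear across orders of magnitude. All inputs below are made-up placeholders, not the model's actual parameters:

    # Fermi-style range propagation with placeholder factors.
    factors = {
        "events per year":       (1.0, 10.0),
        "fraction that qualify": (0.5, 1.0),
        "candidates per event":  (0.1, 1.0),
        "success probability":   (1e-3, 1e-1),
    }

    lo = hi = 1.0
    for _, (a, b) in factors.items():
        lo *= a
        hi *= b

    print(f"{lo:.1e} .. {hi:.1e}")  # spans several orders of magnitude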
3 months have passed, 9 to go :)