frontpage.

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
262•isitcontent•19h ago•33 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
16•sandGorgon•2d ago•3 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
362•vecti•22h ago•162 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
336•eljojo•22h ago•206 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
80•phreda4•19h ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
94•antves•2d ago•70 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•3h ago•1 comment

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
52•nwparker•1d ago•11 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
27•dchu17•1d ago•12 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
153•bsgeraci•1d ago•64 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
7•sakanakana00•4h ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•5h ago•1 comment

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
3•nmfccodes•1h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
19•NathanFlurry•1d ago•9 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•7h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•7h ago•2 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•9h ago•3 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
6•rahuljaguste•19h ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
17•keepamovin•10h ago•5 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•12h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•13h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
2•rs545837•14h ago•1 comment

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
568•deofoo•6d ago•166 comments

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
12•KevinChasse•1d ago•16 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
10•sawyerjhood•1d ago•0 comments

Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)

https://www.getevidex.com
36•amber_raza•1mo ago
Hi HN,

I’m a solo dev building a clinical search engine to help my wife (a resident physician) and her colleagues.

The Problem: Current tools (UpToDate/OpenEvidence) are expensive, slow, or increasingly heavy with pharma ads.

The Solution: I built Evidex to be a clean, privacy-first alternative. Search Demo (GIF): https://imgur.com/a/zoUvINt

Technical Architecture (Search-Based RAG): Instead of using a traditional pre-indexed vector database (like Pinecone), which can serve stale data, I implemented a Real-time RAG pattern (rough sketch after this list):

Orchestrator: A Node.js backend performs "Smart Routing" (regex/keyword analysis) on the query to decide which external APIs to hit (PubMed, Europe PMC, OpenAlex, or ClinicalTrials.gov).

Retrieval: It executes parallel fetches to these APIs at runtime to grab the top ~15 abstracts.

Local Data: Clinical guidelines are stored locally in SQLite and retrieved via full-text search (FTS), ensuring exact matches on medical terminology.

Inference: I’m using Gemini 2.5 Flash to process the concatenated abstracts. The massive context window allows me to feed it distinct search results and force strict citation mapping without latency bottlenecks.
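
For the curious, here is a minimal sketch of that flow (illustrative only: the real routing rules and the OpenAlex/ClinicalTrials.gov branches are omitted, the regex is a placeholder, and error handling is skipped; the endpoint parameters are the documented public ones):

    // Orchestrator sketch: route the query, hit the chosen APIs in parallel,
    // return abstracts for synthesis. The local SQLite FTS guideline lookup
    // (SELECT ... FROM guidelines WHERE guidelines MATCH ?) is omitted.
    // Node 18+, which ships a global fetch.
    type Source = "pubmed" | "europepmc";

    function route(query: string): Source[] {
      // "Smart Routing": cheap regex/keyword checks, no LLM call.
      const preprinty = /\b(preprint|biorxiv|medrxiv)\b/i.test(query);
      return preprinty ? ["europepmc"] : ["pubmed", "europepmc"];
    }

    async function searchPubMed(query: string): Promise<string[]> {
      // NCBI E-utilities: esearch returns PMIDs; a second efetch call
      // (omitted here) pulls the actual abstracts.
      const url =
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi" +
        `?db=pubmed&retmode=json&retmax=15&term=${encodeURIComponent(query)}`;
      const json: any = await (await fetch(url)).json();
      return json.esearchresult.idlist;
    }

    async function searchEuropePmc(query: string): Promise<string[]> {
      // resultType=core includes abstractText in the payload.
      const url =
        "https://www.ebi.ac.uk/europepmc/webservices/rest/search" +
        `?format=json&resultType=core&pageSize=15&query=${encodeURIComponent(query)}`;
      const json: any = await (await fetch(url)).json();
      return json.resultList.result.map((r: any) => r.abstractText ?? r.title);
    }

    export async function retrieve(query: string): Promise<string[]> {
      // Parallel fetches at runtime: nothing pre-indexed, so nothing stale.
      const hits = await Promise.all(
        route(query).map((s) =>
          s === "pubmed" ? searchPubMed(query) : searchEuropePmc(query),
        ),
      );
      return hits.flat();
    }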

Workflow Tools (The "Integration"): I also built a "reasoning layer" to handle complex patient histories (Case Mode) and draft documentation (SOAP Notes). Case Mode Demo (GIF): https://imgur.com/a/h01Zgkx Note Gen Demo (GIF): https://imgur.com/a/DI1S2Y0

Why no Vector DB? In medicine, "freshness" is critical. If a new trial drops today, a pre-indexed vector store might miss it. My real-time approach ensures the answer includes papers published today.

Business Model: The clinical search is free. I plan to monetize by selling billing automation tools to hospital admins later.

Feedback Request: I’d love feedback on the retrieval latency (fetching live APIs is slower than vector lookups) and the accuracy of the synthesized answers.

Comments

neil_naveen•1mo ago
FYI, you are using Clerk in development mode
amber_raza•1mo ago
Oof, good catch! I must have left the test keys active in the deployment config.

Swapping them to production keys right now. Thanks for the heads up!

bflesch•1mo ago
Somehow "clerk" is on my ublock origin blocklist and therefore the whole website is not loading. I didn't add "clerk" to the blocklist so it must've been added by one of the blocklists that ublock origin is subscribed to, so there must be a good reason why "clerk" is on that blocklist.

When building a product for medical audience which might care a lot about privacy maybe don't use components which are shady enough that they end up on blocklists.

Edit:

> Why no Vector DB? In medicine, "freshness" is critical. If a new trial drops today, a pre-indexed vector store might miss it. My real-time approach ensures the answer includes papers published today.

This is total rubbish. Did you talk to a single medical practitioner when building this? Nobody will try new treatments on their patients just because a new paper was "published" (whatever that means; just being added to some search index). These people require trusted sources; experimental treatment is only done for private clients who have tried all other options.

amber_raza•1mo ago
Thanks for the feedback—this is helpful.

1. Re: Clerk/uBlock: You were spot on. The default Clerk domain often gets flagged by strict blocklists. I just updated the DNS records to serve auth from a first-party subdomain (clerk.getevidex.com) to resolve this. It should be working now.

2. Re: Freshness & 'Rubbish': You are absolutely right that standard of care doesn't (and shouldn't) change overnight based on one new paper.

However, the decision to ditch the Vector DB for Live Search wasn't about pushing 'experimental treatments'—it was about Safety and Engineering constraints:

Retractions & Safety Alerts: A stale vector index is a safety risk. If a major paper is retracted or a drug gets a black-box warning today, a live API call to PubMed/EuropePMC reflects that immediately. A vector store is only as good as its last re-index.

The 'Long Tail': Vectorizing the entire PubMed corpus (35M+ citations) is expensive and hard to keep in sync. By using the search APIs directly, we get the full breadth of the database (including older, obscure case reports for rare diseases) without maintaining a massive, potentially stale index.

The goal isn't to be 'bleeding edge'—it's to be 'currently accurate'.

breadislove•1mo ago
a good system (like openevidence) indexes every paper released, and semantic search can be incredibly helpful, since the search apis of all those providers are extremely limited in terms of quality.

now you get why those systems are not cheap. keeping indexes fresh, maintaining high quality at large scale, and being extremely precise is challenging. by having distributed indexes you are at the mercy of the api providers, and i can tell you from previous experience that it won't be 'currently accurate'.

for transparency: i am building a search api, so i am biased. but i have also been building medical retrieval systems for some time.

amber_raza•1mo ago
Appreciate the transparency and the insight from a fellow builder.

You are spot on that maintaining a fresh, high-quality index at scale is the 'hard problem' (and why tools like OpenEvidence are expensive).

However, I found that for clinical queries, Vector/Semantic Search often suffers from 'Semantic Drift'—fuzzily matching concepts that sound similar but are medically distinct.

My architectural bet is on Hybrid RAG:

Trust the MeSH: I rely on PubMed's strict Boolean/MeSH search for the retrieval because for specific drug names or gene variants, exact keyword matching beats vector cosine similarity.

LLM as the Reranker: Since API search relevance can indeed be noisy, I fetch a wider net (top ~30-50 abstracts) and use the LLM's context window to 'rerank' and filter them before synthesis (rough sketch at the end of this comment).

It's definitely a trade-off (latency vs. index freshness), but for a bootstrapped tool, leveraging the NLM's billions of dollars in indexing infrastructure feels like the right lever to pull vs. trying to out-index them.
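
For the curious, the rerank pass is roughly this (a sketch, not the production prompt: the model id and prompt wording are illustrative, and real code would parse the reply defensively):

    import { GoogleGenerativeAI } from "@google/generative-ai";

    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });

    // Fetch wide (top ~30-50 abstracts), then let the model pick the few
    // that actually bear on the clinical question before synthesis.
    async function rerank(question: string, abstracts: string[]): Promise<string[]> {
      const numbered = abstracts.map((a, i) => `[${i}] ${a}`).join("\n\n");
      const prompt =
        `Question: ${question}\n\n${numbered}\n\n` +
        "Return a JSON array with the indices of the 10 abstracts most " +
        "relevant to the question, most relevant first. JSON array only.";
      const result = await model.generateContent(prompt);
      // Assumes a bare JSON reply; a production version would validate it.
      const indices: number[] = JSON.parse(result.response.text());
      return indices
        .filter((i) => i >= 0 && i < abstracts.length)
        .map((i) => abstracts[i]);
    }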

jyscao•1mo ago
This sounds like a cookie cutter ChatGPT reply.
amber_raza•1mo ago
Haha, ouch. I promise it’s just me—I just spent 20 minutes rewriting that comment because I didn't want to sound like an idiot explaining search to a search engineer. I'll take it as a sign to dial back the formatting next time.
bflesch•1mo ago
That emdash in your reply is so in-your-face "—".
bflesch•1mo ago
Now it is loading. You are still in violation of GDPR rules by including an SVG file with the Google logo from the clerk.com domain and a CSS file from tailwindcss.com; both are tracking users. There is no privacy policy on your page. The privacy policy should list the companies you share visitor data with, what kind of data is shared, and how I can refuse that sharing.
amber_raza•1mo ago
Fair point on the Privacy Policy link. That definitely slipped through the cracks in the launch rush. I just pushed a fix to add it to the footer now.

Re: the trackers: The SVG is just the icon inside the Clerk login button, but you're right that loading Tailwind via CDN isn't ideal for strict GDPR IP-masking. I'll look into self-hosting the assets to clean that up.

adit_ya1•1mo ago
Out of curiosity, what's the prioritization of evidence (RCT meta-analysis > RCT > observational, etc.), and what's the end-user benefit over a tool like OpenEvidence? You mention that other tools are expensive, slow, or increasingly heavy with pharma ads, but OpenEvidence for now seems pretty similar in offerings, speed, and responses. What's your pitch as to why one should prefer this?
amber_raza•1mo ago
Great questions.

1. Prioritization: I instruct the model to prioritize evidence in this hierarchy: Meta-Analyses & Systematic Reviews > RCTs > Observational Studies > Case Reports. It explicitly deprioritizes non-human studies unless specified.

2. Why not OpenEvidence? OE is excellent! But we made two architectural choices to solve different problems:

'Long Tail' Coverage: OE relies on a pre-indexed vector store, which often creates a blind spot for niche/rare diseases where papers aren't in the 'Top 1% of Journals.' Because Evidex queries live APIs, we catch the obscure case reports that static indexes often prune out.

Workflow: OE is a 'Consultant' (Q&A). Evidex is a 'Resident' (Grunt work). The 'Case Mode' is built to take messy patient histories and draft the actual documentation (SOAP Notes/Appeals) you have to write after finding the answer.

eoravkin•1mo ago
Out of curiosity, did you actually see any pharma ads on OpenEvidence?
amber_raza•1mo ago
Great question. I haven't seen banner ads on OpenEvidence yet, but the 'hidden tax' of free tools is often Publisher Bias.

Users have noted that some current tools heavily overweight citations from 'Partner Journals' (like NEJM/JAMA) because they index the full text, effectively burying better papers from non-partner journals in the vector retrieval.

My goal is strictly Neutral Retrieval. By hitting the PubMed/OpenAlex APIs live, Evidex treats a niche pediatric journal with the same relevance weight as a major publisher, ensuring the 'Long Tail' of evidence isn't drowned out by business partnerships.

breadislove•1mo ago
this might be interesting: https://www.theinformation.com/articles/chatgpt-doctors-star...

> $150M RR on just ads, +3x from August. On <1M users.

source: https://x.com/ArfurRock/status/1999618200024076620

amber_raza•1mo ago
Whoa. $150M ARR on ads is a wild stat.

Thanks for sharing that source. It really validates the thesis that unless the user pays (SaaS), the Pharma companies are the real customers.

eoravkin•1mo ago
You built a cool product. I'm actually one of the founders of https://medisearch.io which is similar to what you are building. I think the long-tail problem that you describe can be solved in other ways than with live APIs and you may find other problems with using live APIs.
amber_raza•1mo ago
Thanks! I just took a look at MediSearch. It looks really clean.

You are definitely right that Live APIs come with their own headaches (mostly latency and rate limits).

For now, I chose this path to avoid the infrastructure overhead of maintaining a massive fresh index as a solo dev. However, I suspect that as usage grows, I will have to move toward a hybrid model where I cache or index the 'head' of the query distribution to improve performance.

Always great to meet others tackling this space. I’d love to swap notes sometime if you are open to it.

dataviz1000•1mo ago
I'm working on building an AI agent that creates queries over a time-series database focused on financial data. For example, it can quantify Federal Reserve reports and generate a table showing how SPY reacted 30 minutes after, at EoD, at the next day’s open, and at the next day’s EoD. It will plan the database query and then query the data from a materialized view. It is magic!

How would biomedical researchers use tons of time-series data? A better question is: what questions are biomedical researchers asking with time-series data? I'm a lot more interested in generalized querying over time-series data than just financial data. What would be a great proof of concept?

amber_raza•1mo ago
That sounds like a fascinating project.

To answer your question: In the biomedical world, the 'Time-Series' equivalent is Patient Telemetry (Continuous Glucose Monitors, ICU Vitals, Wearables).

The Question Researchers Ask: 'Can we predict sepsis/stroke 4 hours before it happens based on the velocity of change in Heart Rate + BP?'

Right now, Evidex is focused on the Unstructured Text (Literature/Guidelines) rather than the structured time-series data, but the 'Holy Grail' of medical AI is eventually combining them: Using the Literature to interpret the Live Vitals in real-time.

jph•1mo ago
Great project. Want to contact me when you'd like to talk? I do software engineering for clinicians at a health care organization, and I'd love to have my teams try your work in their own contexts. Email joel@joelparkerhenderson.com.
amber_raza•1mo ago
Thanks, Joel! This is exactly the kind of clinical workflow I built 'Case Mode' for.

I will send you an email shortly to get connected. I'd love to get your teams set up with a pilot instance. Appreciate the reach out.

OutOfHere•1mo ago
All such custom sites are increasingly unnecessary since modern thinking AIs like ChatGPT 5.2 Extended and Gemini 3 Pro do an incredible job surfacing good papers. In my experience, the benefit comes from using multiple AIs because they all have blind spots, and none is pareto optimal.

As a patient, sometimes I don't want the AI to have my entire medical history, as this lets me consider things from different angles. For each chat, I give it the reconstructed history that I think is sufficient. I want it to be an explorer more than a doctor.

amber_raza•1mo ago
That is a fair critique. The frontier models are getting incredible at general reasoning.

The gap Evidex fills isn't 'Intelligence'. It is Provenance and Liability.

Strict Sourcing: Even advanced models can hallucinate a plausible-sounding study. Evidex constrains the model to answer only using the abstracts returned by the API. This reduces the risk of a 'creative' citation (sketch at the end of this comment).

Explorer vs. Operator: You mentioned using AI as an 'explorer' (Patient use case). Doctors are usually 'operators'. They need to find the specific dosage or guideline quickly to close a chart.

I view this less as replacing Gemini/GPT. It is more of a 'Safety Wrapper' around them for a high-stakes environment.
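
Concretely, the constraint is mechanical rather than just polite prompting. A simplified sketch of the post-check:

    // The synthesis prompt numbers the abstracts and demands [n]-style
    // citations; this check flags any citation pointing outside the set
    // that was actually provided (i.e. a hallucinated source).
    function invalidCitations(answer: string, abstractCount: number): number[] {
      const cited = [...answer.matchAll(/\[(\d+)\]/g)].map((m) => Number(m[1]));
      return [...new Set(cited)].filter((i) => i >= abstractCount);
    }

    // A non-empty result triggers a regeneration instead of showing the
    // answer to the clinician.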

OutOfHere•1mo ago
The problem is that doctors almost always, except perhaps in the emergency department, are currently too full of themselves and are not open to reading relevant research unless a patient like me forces it upon them. Maybe they are busy, but that doesn't work for the patient. Even when a patient does force research on them, the doctor will often read only a single line from an entire paper. How do you change this culture? It doesn't serve the patient well to get an inaccurate root-cause diagnosis, as I often do. It falls upon the patient to really spend the time investigating and testing hypotheses and theories, failing which the root causes go ignored, and one ends up taking too many unnecessary or even harmful pharmaceuticals.
amber_raza•1mo ago
I hear that frustration. The reality is that the 15-minute visit model leaves zero time for 'deep dives', which leads to the friction you described.

My hope is that by reducing the time it takes to verify a paper from 20 minutes to 30 seconds, we can make it easier for providers to actually engage with the research a patient brings in. It helps prevent them from dismissing it just because they 'don't have time to read it'.

OutOfHere•1mo ago
If possible, it eventually needs to become integrated into the clinician's existing workflow, to become a core part of it. As it stands, medical practice is in the dark ages by ignoring much of research in clinical practice.
amber_raza•1mo ago
100%. The 'Alt-Tab' tax is the biggest barrier to adoption. Starting as a 'second screen' is just step one; deep integration into the workflow is the eventual north star.
pdyc•1mo ago
I like your approach of "smart routing", but a regex/keyword-based approach has the problem that it doesn't capture semantic similarity, so searches with similar intent are missed. How are you handling that? Or do you not need to handle it, since the tool is for domain experts who are likely to search by keyword (dictionary terms)?
amber_raza•1mo ago
You hit the nail on the head regarding the 'semantic gap'.

Currently, I handle this via Smart Routing. The engine analyzes the intent of your query (e.g. identifying if you’re looking for an RCT, a specific guideline, or drug dosing) and routes it to the most relevant clinical database using high-precision keyword matching.

I chose this deterministic approach for the launch to ensure clinical precision. While vector/semantic search is great for general concepts, it can sometimes surface 'similar-ish' papers that miss the specific medical nuances (like a specific ICD-10 code or dosage) required for clinical evidence.

The LLM (Gemini 2.5 Flash) currently lives in the Synthesis Layer. It takes the raw, high-precision results and synthesizes them into the clinical summaries you see.

I actually have LLM-based query expansion (translating natural language into robust MeSH/Boolean strings) built into the infrastructure, but I am keeping it in 'staging' right now. I want to ensure that as I bridge that semantic gap, I don't sacrifice the deterministic accuracy that medical professionals expect.
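
For what it's worth, that staged expansion step is a single constrained generation. A sketch (model id and prompt wording are illustrative):

    import { GoogleGenerativeAI } from "@google/generative-ai";

    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });

    // Staged feature: translate a natural-language question into a strict
    // PubMed Boolean/MeSH string, which the deterministic retrieval then runs.
    async function expandQuery(question: string): Promise<string> {
      const prompt =
        "Rewrite this clinical question as a PubMed search using Boolean " +
        "operators and [MeSH Terms] tags where appropriate. Return only " +
        `the query string.\n\nQuestion: ${question}`;
      const result = await model.generateContent(prompt);
      return result.response.text().trim();
    }

    // e.g. "does metformin help with PCOS?" might come back as
    // "metformin[MeSH Terms] AND polycystic ovary syndrome[MeSH Terms]"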

craigdalton•1mo ago
Excuse the blunt metaphor, but there is a risk here of turning on a fire-hose of "fresh" garbage. John Ioannidis, one of the doyens of evidence-based medicine, very persuasively argues this in Why Most Published Research Findings Are False: https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/ That is why platforms pay physicians/epidemiologists/specialists in their field hundreds of dollars per hour to sort the good papers from the bad. After my training as a doctor I did a Masters in Clinical Epidemiology and spent an afternoon each week in a tutorial that reviewed papers in the top journals; about 20-30% of them had major flaws that were either ignored or dismissed by the authors. It may be worse now.

LLMs still have trouble picking up the subtleties of medical science and will miss papers with major flaws. I just did a test on a paper that is often quoted as providing evidence of excess cancer risk in communities living close to unconventional gas facilities. When I asked ChatGPT 5.2 to review the paper for evidence of increased cancer risk with a simple prompt, it said the paper found such a risk. However, when I wrote a multi-discipline-based prompt for 5.2 and Gemini 3 Pro, it found the fatal flaw in the paper and advised that it did not provide evidence. See the prompt below and consider how the prompts would have to be individually developed for each paper and meta-analysis.

For review of a meta-analysis you would need prompts developed by expert methodologists and discipline specialists. Here is the prompt that worked: You are an environmental epidemiologist and exposure scientist; critically review this paper's claim that the measured levels of unconventional gas emissions provide evidence of excess cancer risk: https://link.springer.com/article/10.1186/1476-069X-13-82

amber_raza•1mo ago
This is a fantastic critique. Spot on. Freshness without appraisal is just an accelerated firehose of noise.

1. The Garbage Filter: Right now, I rely on a strict Hierarchy of Evidence to mitigate this (prioritizing Cochrane/Meta-analyses over observational studies), but you are absolutely right that LLMs can miss fatal methodological flaws in a single, high-ranking paper.

2. The 'Critic' Agent: I’m currently experimenting with a secondary 'Critic' pass. This is an LLM agent specifically prompted to act as a skeptic/methodologist to flag limitations before the main synthesis happens (tiny sketch at the end of this comment).

3. Multi-discipline prompting: The prompt you provided is a great case study in persona-based auditing. I’d love to learn more about the specific 'disciplines' or archetypes you’ve found most effective at catching these flaws. That is exactly the kind of domain expertise I’m trying to encode into the system.
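
The critic pass in (2) is just a second, adversarial model call that runs per abstract before synthesis; a sketch (prompt wording illustrative):

    import { GoogleGenerativeAI } from "@google/generative-ai";

    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });

    // Adversarial pre-pass: a skeptic/methodologist persona flags
    // limitations, and those flags are fed into the main synthesis prompt.
    async function criticPass(abstract: string): Promise<string> {
      const prompt =
        "You are a skeptical methodologist. Tersely list the major " +
        "methodological limitations of this study (power, confounding, " +
        `selection bias, endpoint choice).\n\nAbstract: ${abstract}`;
      const result = await model.generateContent(prompt);
      return result.response.text();
    }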

OutOfHere•1mo ago
I warn against prioritizing Cochrane. It will block essential information from surfacing. This holds science back for over a decade. The best way to make science emerge is to take peer-reviewed reviews and meta-analyses at face value. If a particular review is bad, it will soon be corrected by other reviews, so don't worry about it.
amber_raza•1mo ago
That is a fair distinction.

My default right now is Clinical Safety. I prioritize high-grade evidence to prevent harm at the bedside.

However, for Research/Discovery, you are absolutely right. Excessive 'Gatekeeping' can slow down innovation.

The long-term fix is likely a 'Filter Dial'. We need tight constraints for treatment decisions, but loose constraints for hypothesis generation. I plan to support both modes.

craigdalton•1mo ago
I really disagree with this, and there is ample evidence that science is not "self-correcting". Read Retraction Watch. I personally wrote to a journal on 3 occasions and phoned them twice to alert them to an error in a paper that the authors were reluctant to own up to and correct. I had inside knowledge and was able to provide evidence of the error. The journal did nothing; they passed the message on to a range of sub-editors (a revolving door), with no investigation and no response. Google the "reproducibility crisis", including the coverage of the issue in Nature, to see how resistant to correction medical science can be.

Regarding Cochrane: it is reliable if it says a treatment works or an exposure has an effect, but sometimes they miss effects because they rely only on particular sources of evidence, e.g. RCTs; they were wrong on the effectiveness of masks. As an example of a reasonably up-to-date, evidence-based, free online review source, see StatPearls.

OutOfHere•1mo ago
I fully understand that various articles, even peer-reviewed ones, can be bogus, and some reviews can be bogus too when they demonstrate an unfair bias in selecting articles. Journal managers too can be altogether apathetic. Even so, it has been my experience that reviews over the long term converge to the truth.

As for individual studies, if a study is important, it often gets tested by others, although sometimes it doesn't, and then it's a decision-theoretic play.

Cochrane in my estimation examines things from very narrow angles, and this can miss wide-ranging applicability to the real world.

craigdalton•1mo ago
The personas have to be paper-specific, I believe, addressing the content and methods. I guess an LLM could do a once-over of the paper or meta-analysis to determine the best discipline-specific personas, but it would be interesting to test that. There are also the benefits of deep expertise and understanding a field for decades. For example, I know a set of authors who find significant associations in almost every study they do in a field, whereas others have variable results. They also seem to ignore good studies that disagree with their hypotheses and use inferior studies that support their position in review papers, so I don't really trust their work. It would be great if an LLM could develop that kind of understanding and somehow deprecate a body of work that had inherent author or institutional biases, even though on the surface the review looks legitimate. For a meta-analysis it is often the papers that are omitted that are most telling. That means the LLM would need to redo the entire search and synthesis - yikes!
amber_raza•1mo ago
You just articulated the 'Holy Grail' of automated appraisal. Detecting bias across a career is a massive graph problem compared to checking a single paper. It essentially requires auditing an entire bibliography before synthesis.

I am adding 'Author Reputation/Bias Analysis' to the long-term roadmap. Thanks for the rigorous stress-test today.

craigdalton•1mo ago
How will you do this? One author I don't trust (I sent them an error they missed in their paper; they didn't correct it, and there is systemic bias in their writing) was invited to write a review article by the New England Journal of Medicine, and so has an excellent reputation for all the world to see.
amber_raza•1mo ago
You found the ultimate edge case. The 'Prestige Proxy' (NEJM = Truth) essentially masks that individual's actual track record.

While we might be able to detect 'Insular Citation Clusters' mathematically to flag systemic bias, no model can catch a private signal like an ignored email. It reinforces why the human expert is indispensable. The tool is a force multiplier for judgment, not a substitute.