Ask HN: Is using AI tooling for a PhD literature review dishonest?

7•latand6•5h ago
I'm a PhD student in structural engineering. My dissertation topic is using LLM agents to automate FEA calculations in the common Ukrainian software that companies use. I'm writing my literature review now, and I've vibecoded a personal local dashboard that helps me manage the process.

I use LLM agents to fill in a LaTeX template in a GitHub repo (this automates formatting, and I can use an IDE to view diffs). Then I run ChatGPT Pro to collect all the papers relevant to my topic, noting how each is relevant. Then I download the ones whose PDFs are available online. I keep everything in a structured folder tree of plain files like Markdown and JSON.
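For illustration, one record in such a plain-file store might look like the sketch below. The field names and layout here are invented for illustration; the actual schema isn't shown in the post.

```python
# Illustrative sketch of one plain-file record in a pipeline like this:
# a claim tied to a source quote, stored as JSON alongside the PDFs.
# All field names are hypothetical.
import json

claim = {
    "claim": "LLM agents can automate repetitive FEA pre-processing steps.",
    "quote": "We automate model setup using an agentic workflow.",
    "source_pdf": "papers/smith2024.pdf",
    "page": 4,
    "reviewed": False,   # ticked manually after human review
    "verified": False,   # set by the quote-verification script
}

# Round-trip through JSON, as a plain-file store would.
record = json.loads(json.dumps(claim))
```

Keeping records as plain JSON like this is what makes the manual tick-boxes and the verification script easy to bolt on later.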

The idea of the dashboard is the following: I run Codex through a web chat to identify quotes that are relevant to my dissertation topic, and how they are relevant; it combines them into a set of claims, each connected to a quote with a link. Then I manually review each quote and each claim and tick the boxes. There is also a button that runs a verification script, which validates that the exact quote really IS in the PDF. This way I can collect real evidence and pick up new insights while reading.
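The verification step described above can be sketched roughly like this. This is a minimal illustration, not the actual script from the post, and it assumes the PDF has already been extracted to plain text upstream (e.g. with a library like pypdf):

```python
# Minimal sketch of quote verification: check that an LLM-extracted quote
# really appears in the paper's extracted text. PDF-to-text extraction is
# assumed to happen elsewhere; here we only verify the string match.
import re

def normalize(s: str) -> str:
    """Collapse whitespace and lowercase, so line breaks and spacing
    artifacts in extracted PDF text don't cause false negatives."""
    return re.sub(r"\s+", " ", s).strip().lower()

def quote_in_text(quote: str, paper_text: str) -> bool:
    """Return True if the quote occurs verbatim (up to whitespace/case)."""
    return normalize(quote) in normalize(paper_text)
```

A fuller version would also record the page number of the match and flag near-misses (e.g. via `difflib.SequenceMatcher`) for manual review instead of returning a bare boolean.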

I remember doing all of this manually during my master's degree in the UK. That was a terrible and tedious experience, partly because I have ADHD.

So my question is, is it dishonest?

I can defend every claim in the review, because I built the verification pipeline and reviewed each one manually. I arguably understand the literature better than if I had read and highlighted everything myself by hand. But I know that many universities would consider any AI-generated text academic misconduct.

I don't quite understand the principle behind this position. If you outsource proofreading, nobody cares; the same goes for using Grammarly. But if I use an LLM to generate text from verified, structured, human-reviewed evidence, it might be considered dishonest.

Comments

love2read•3h ago
Someone against AI will tell you yes, someone for AI will tell you no. The only thing I can really say is that I don't agree with claiming ADHD as grounds for a reprieve from the normal rules.
jimbooonooo•1h ago
I was diagnosed later in life with ADHD and struggled academically, but agree with this completely. Everybody faces difficulties in life, and ADHD doesn't justify constant exceptions. Your workplace will be far less accommodating, and you need to figure out how to adapt.

AI is a great tool for literature review, but I think the onus is on you to both verify the output AND disclose usage of said tool. Clearly describing your methodology is an important skill for writing papers anyway.

QubridAI•3h ago
Not dishonest if you verify everything and understand it deeply, but you should be transparent about your AI use, since many universities care more about disclosure than about the method itself.
adampunk•3h ago
I don’t know if it is dishonest. What I do know is that it will only save you time if you have a very specific and testable need. Otherwise it will appear to save time and produce something that you won’t be proud of.
fyredge•2h ago
Yes and no. The first thing to understand is that in academia, knowledge is the work. You are being trained to absorb existing knowledge, hypothesise new knowledge and test if it is valid.

LLMs are a useful tool if you want to generate text. But in the context of research, this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%? The business of research is the business of factual knowledge. Every piece of information is expected to be scrutinized. That's why dishonesty is so severely looked down upon (falsifying data, plagiarism, etc.).

I would say your use case is not dishonest, but I would also ask you to think from the university's perspective. How would they know whether their students are using it honestly, as you did? How can they, with their limited resources, make sure research integrity is upheld in the face of automated hallucinations?

At the end of the day, the question is not whether using AI is dishonest; it's whether you can walk into an antagonistic panel and defend the claim that you understand the knowledge of your field (without live AI help). If you can do that, and can also make sure the contents are not hallucinated, then I don't see why not.

Neosmith_amit•1h ago
No, I don't think it is dishonest.

At the same time, I would recommend documenting your methodology explicitly in the dissertation: describe the verification pipeline, and make it clear what you reviewed manually versus what was automated. That transparency converts "dishonest?" into "methodologically rigorous."

Here is the thing: academic policy is NOT really about honesty. It is about trust. Universities cannot distinguish your workflow from that of someone who prompted GPT to write their lit review wholesale.

More than an ethical distinction, I believe the rules around AI usage are blunt because enforcement is hard.

bjourne•1h ago
You cannot copy others' work and claim it is your own. Thus, you cannot copy ChatGPT's work and claim it is your own. There is a qualitative difference between having an LLM generate text and having a program spell- and grammar-check text. Since you are not going to highlight which passages in your article ChatGPT wrote for you, and instead intend to pass it off as your own creative work, it is dishonest. Very dishonest. If caught, you will get in trouble and may be kicked out of your academic programme.
austinjp•1h ago
While your dashboard sounds fancy, this part raises issues:

> I run ChatGPT Pro to collect all relevant papers

Any literature review must be reproducible. If you can't say exactly what queries you ran against exactly what databases, you'll get into trouble. Whether or not that's the way things should be is irrelevant: it's the way things are.

You should ask your supervisor whether your approach is okay. If necessary, frame it theoretically: "would it be okay if I were to....?" If your supervisor is unavailable, seek advice from their colleagues.

Since you mention ADHD, you're likely to be strongly motivated by novelty. Don't spend time building a dashboard that you could spend on writing your thesis. If you're not getting support from your university, get it now. It might not help, but it's a signal to the university that you're engaging with the system.

BrenBarn•21m ago
> Any literature review must be reproducible.

That's totally at odds with my understanding, but perhaps this differs between fields.

malshe•18m ago
I don't think what you are doing is dishonest. But my opinion hardly matters.

My advice is to talk to your dissertation committee chair to understand whether they think it is dishonest. Furthermore, read your university's AI usage policies. If they don't consider what you are doing a permissible use of AI, no amount of assurance on HN or any online forum is gonna help you.
