AI-assisted writing: where the defensible lines are
A practical framework for using LLMs in research writing without compromising authorship. A working test, not a legal document.
LLM-assisted writing is now common across faculty, postdocs, and graduate students. It is also, for most institutions and most journals, still a moving policy target. A single test catches most failure modes: could the author defend every claim in the manuscript orally, with the LLM turned off, if a reviewer asked?
If yes, the use is defensible. If no, the thinking has been offloaded, not the typing, and that is the problem.
A working framework follows.
Green zone: almost always defensible
These uses preserve authorship. The thinking is the author's; the LLM helps produce the artifact.
- Brainstorming. Generating 20 possible thesis statements, then picking and refining one.
- Outlining. Turning a scatter of ideas into a structured plan the author then executes.
- Copy editing. Fixing grammar, typos, and awkward sentences in words already written.
- Summarizing papers the author has read, to aid memory.
- Explaining concepts in plain language after the author has done the source reading.
- Drafting administrative writing (cover letters, schedule requests, non-graded memos).
Yellow zone: depends on the venue and the disclosure
These uses are increasingly accepted, but the discipline, the journal, the funder, or the supervisor sets the rule. Default to checking before submission, and disclose in any case.
- Generating a first draft that the author then rewrites. Most journals are fine; some are not. Nature, Science, and the major medical journals all publish explicit policies.
- Paraphrasing a passage the author would otherwise quote. The LLM's paraphrase is not the author's paraphrase; both attribution and voice are at stake.
- Translating between languages. Usually fine in research writing; less fine in language-acquisition assessments.
- Solving a worked problem and checking the result. Usually fine in scratch work; problematic in take-home exams or graded problem sets.
The right move in the yellow zone is to ask in advance and disclose what was used. A one-sentence note in the methods or the acknowledgments, naming what the LLM did and what the author did, signals integrity and lets reviewers make informed judgments.
Red zone: never defensible
These uses let the LLM do the load-bearing thinking, and no disclosure or venue policy makes them defensible:
- Submitting a finished paper that the LLM drafted with no substantive editing.
- Getting answers to exam questions, take-home or otherwise.
- In a course where the code itself is graded, submitting code the author could not write themselves.
- Citing a passage, statistic, or source the LLM produced as if the author had read it. This fabricates evidence and is worse than plagiarism.
Disclosure, specifically
When a venue permits LLM assistance, the disclosure should be specific. A good one:
"An LLM was used for brainstorming and a first-pass draft of Section 2. All citations were verified against the original sources. The final argument and structure are the author's."
Compare:
"Some AI was used."
Specific disclosures protect the author. Vague ones invite suspicion and signal the author is uncertain about what was actually done.
Three-question integrity test
Before submitting any LLM-assisted writing, work through:
- If a reviewer asked me to explain any paragraph orally, could I?
- If the LLM is wrong somewhere in the manuscript, would I catch it?
- Have I read every source I cited?
A "no" to any of these means a line has been crossed and needs to be walked back. The fix is usually not "use less AI." It is "use it differently." Go back, understand what the LLM drafted, and either rewrite it in the author's own terms or remove it.
What a workspace can add
Academe traces LLM-generated insertions. AI-written content is marked in the editor with a visible style, and claims drawn from uploaded sources link back to the source passage. On export, the platform can generate a disclosure statement listing which sections had LLM assistance and what kind.
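A minimal sketch of what that provenance model could look like, assuming a per-insertion record of origin and assistance type; the names here (Origin, Span, disclosure_statement) are hypothetical illustrations, not Academe's actual API:

```python
from dataclasses import dataclass
from enum import Enum


class Origin(Enum):
    HUMAN = "human"            # typed by the author
    LLM = "llm"                # inserted by the model, unedited
    LLM_EDITED = "llm-edited"  # model draft substantively rewritten by the author


@dataclass
class Span:
    section: str                   # e.g. "Section 2"
    origin: Origin
    assistance: str = ""           # what the LLM did: "first-pass draft", "copy edit"
    source_ref: str | None = None  # optional link back to an uploaded source passage


def disclosure_statement(spans: list[Span]) -> str:
    """Collect all LLM-touched spans into a one-paragraph disclosure."""
    touched: dict[str, set[str]] = {}
    for span in spans:
        if span.origin is not Origin.HUMAN:
            touched.setdefault(span.section, set()).add(span.assistance)
    if not touched:
        return "No LLM assistance was used."
    parts = [
        f"{section}: {', '.join(sorted(kinds))}"
        for section, kinds in sorted(touched.items())
    ]
    return "An LLM assisted with the following. " + "; ".join(parts) + "."


spans = [
    Span("Section 1", Origin.HUMAN),
    Span("Section 2", Origin.LLM_EDITED, "first-pass draft"),
    Span("Section 3", Origin.LLM, "copy edit"),
]
print(disclosure_statement(spans))
# An LLM assisted with the following. Section 2: first-pass draft; Section 3: copy edit.
```

The design point is that the disclosure is derived from the edit history rather than written from memory at submission time, which is what makes it specific in the way the previous section asks for.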
The goal is not to hide LLM use. It is to make the use defensible and to prove that it was.