During a group research project, one of my colleagues used ChatGPT to generate large portions of our literature review and passed the output off as their original work, without citing the tool or verifying its content. I noticed inconsistencies in the writing style, along with vague, unsupported claims, which raised concerns. When I asked about it, they admitted to using ChatGPT but didn’t think it was an issue since “everyone uses it now.” I explained that while AI can be a helpful tool, transparency and proper attribution are essential to maintaining research integrity.

We brought the issue to our group adviser, who reminded us of the university’s guidelines on AI use in academic work. As a result, the section was revised with proper citations and fact-checking, and our team agreed to be more careful going forward. The incident was a valuable reminder that ethical use of AI means being honest about how it is used and taking responsibility for the accuracy of the content it helps produce.