I am a lecturer at a university. I strongly encourage anyone accused of cheating with AI to emphatically deny it. Even when I run AI detection, it is essentially my judgement. “AI detection” software is unreliable – it claims stuff I wrote as an undergrad, well before ChatGPT existed, was written by ChatGPT. Often it is just the lecturer’s opinion that you used AI.
If a university punishes you despite you claiming you didn’t, I would recommend suing them – they will back down faster than you think.
Universities need to begin to understand that AI is here now, and need to get over the idea that using it is cheating. I set assignments that either can’t be exploited by AI use (and if they are, good on the student for being smarter than me) or where I hope AI is used – but many of my colleagues need to get with the times.
I was actually thinking about this recently. People are going to be completing their degrees, yet have no actual knowledge about their field of work.
I wonder if universities will bring back paper tests – the tests that you have to take IN class with invigilators supervising. I feel like that’s the only way to guarantee that no AI is being used… (I say this as a university student who’s definitely taken guidance from ChatGPT before)
I think AI is far less of a problem than people just paying for others’ services to do their essays for them.
Which has been going on for a very long time.
I’d be interested to know how/why students are using ChatGPT (or similar) in assignments.
My uni days finished long before ChatGPT was a thing, but I have a real bug-bear with AI (see my post – not comment – history for some astounding stupidity and belligerence from the AI fanboys) and can’t really work out what it could help with.
For anything science/maths/engineering based, it gets basic facts wrong almost as a rule. So even if the person marking it had zero clue you were using AI, you’d fail anyway just for getting the basics wrong.
For anything creative-writing based, it writes appallingly at anything much above year 9/GCSE level, so again you’d fail even if you weren’t rumbled for using AI. You’d simply fail for poor writing standards and – a major issue with AI – for ignoring the question.
The article is very scant on details – no mention of what course this was – so I’m struggling to work out how you’d use it and not create *more* work for yourself along the way.
As u/Fox_9810 says though, AI detection software is generally as bad as AI itself – i.e. fucking useless. If you outright deny it then you should be fine. And a small correction on his suggestion to sue – don’t bother going that far; simply threaten it. That will be enough. No need to pay for a solicitor or anything.
A real concern for me is that people are so ignorant of the limitations of AI that students will be unfairly hauled over the coals when they haven’t even used it.
I think there’s a place for AI chat bots in assignment writing.
For example, if you’re looking for a concept to be explained in different language, or for ‘sources that support x’, etc.
In the future, kids who don’t know how to use AI will be like adults who don’t know how to use the internet.
The problem is not using AI; it’s kids just typing the question in and copying directly from the AI’s answer.
It is, and I cannot stress this enough, functionally impossible for current universities to detect AI assignments unless you’ve done it really blatantly.
Also, they are actually not meant to be using “plagiarism detection software” to initiate these academic misconduct proceedings at all.
I’m not in favour of ChatGPT per se, but I think one advantage is that it is like a teacher that never gets annoyed at you asking questions until you eventually understand.
I remember many, many moons ago, in the late 90s, when I was in college doing a computer course. Because it was a BTEC course, it involved lots of crap we didn’t really need to learn, like computer programming.
At the time it was Pascal programming, and the theory side was JSP (Jackson Structured Programming).
None of us could get our heads around it, so the tutor – who I have to give total respect to – redid all his notes and actually taught it to us in reverse: he taught us all the diagrams and everything first, then the theory. It didn’t matter how many questions we had or how long he had to spend with us; he did it to make sure we passed.
Most tutors would not have done that for us.
I think that is where something like ChatGPT could really come into its own and be an extremely useful tool. Sadly, as always, many take shortcuts and use it and other tools for cheating.
I think one way many universities could identify those who have cheated is to change the exam format: have students spend an hour at the start of an exam researching something similar in nature to what they’ve been presented in class work, then spend another two hours or more writing it up without internet access. If they’ve not cheated, it will be in a similar style to their previous work.
If they have cheated, it won’t be anywhere near as good, and probably nothing like what they’ve previously done.
But do universities really care whether students cheat or not, as long as the grades are good and they can keep charging more money to future students?
From my experience of writing essays using ChatGPT, if you were to just copy and paste a 1000-word essay from ChatGPT, it’s painfully obvious and you deserve to get caught.
If, however, you use it to give you ideas and a rough guide, you won’t get caught. Personally, I wouldn’t even say it’s cheating.
The **only** saving grace for Hannah is that it was in her first year, which traditionally doesn’t count towards your degree that much.
>You’ve got to embrace it. You can ask it questions and it helps you out.
I’d *hope* that students would know better than this, just like knowing better than taking a couple of google hits, but it would appear not.
For more than the most basic stuff, current AI is **really** bad at giving an accurate, reliable answer, and **really good** at being confidently wrong.
They are trained on random Google results and fucking Reddit, for crying out loud.
Currently being falsely accused of using AI for my Bachelor’s – it has “detected” stuff that there’s no way AI could have written, because it’s not in the public domain. Staff don’t understand AI or how the detection software works; this tooling needs an entire overhaul.
Interesting topic. ChatGPT is at this moment a very basic use of AI and still very much in its infancy – it has its flaws and a long way to go, but it will no doubt improve quite dramatically in the future.
My partner is a teacher at an NHS training college. When her students use AI it is, in her description, obvious because of certain errors she and her colleagues notice. She is adamant that allowing the use of AI in student nursing etc. is potentially dangerous, since the skills required have to be understood fully, and they are finding that students whose first language is not English are increasingly using it as a cheat code.
Personally, I am invested in AI – literally, as an equities investor who invests in and follows Nvidia, Palantir and many other leaders in tech paving the way to our future.
The rhetoric is clear from the likes of Jensen Huang that there are many marvels to come but that we are very much at the beginning.
Does a digital artist who uses AI to polish their light sources “cheat”?
No.
It’s a modern version of an artist using a stencil – a tool or technique that helps develop quality. Academia just needs to adapt.
I work in IT in education. Kids are already using AI to complete their homework. All the time.
A weird shift is coming. You’re gonna end up with people who are skilled at interrogating LLMs and getting them to generate information and content with minimal knowledge of the subject matter themselves.
It’s a weird time.
It’s also used in interviews. We have asked questions in a live interview, and you can tell the person has quickly typed them into an AI and just read out the response.
Uses of AI for uni coursework that are harmless:
– reference formatting (this alone is a huge time saver)
– scaffolding / plan review (how can I make this flow better, headings etc)
– mark my essay based on the rubric
– reference material suggestions (yes, it can find real sources and link them)
These things already offer a huge advantage over when I started higher education, and make it faster. If you’re copy-pasting, you’re the sort that would’ve paid for / plagiarised a friend’s report (that also happened to me – he’s still not employed 10 years on, despite getting the degree).
Punishing children for using tools to complete their tasks would be akin to whipping Chinese or Indian children for using an abacus.
This older Reddit post includes some interesting options: https://www.reddit.com/r/Professors/s/QBFsmRxlgw
I put this in a reply but it is such fun that I’d like more people to see it: https://www.bailii.org/uk/cases/UKFTT/TC/2023/TC09010.html