Is it invisible to accessibility options as well? Like if I need a computer to tell me what the assignment is, will it tell me to do the thing that will make you think I cheated?
I wish more teachers and academics would do this, because I"m seeing too many cases of "That one student I pegged as not so bright because my class is in the morning and they're a night person, has just turned in competent work. They've gotta be using ChatGPT, time to report them for plagurism. So glad that we expell more cheaters than ever!" and similar stories.
Even heard of a guy who proved he wasn't cheating, but was still reported anyway simply because the teacher didn't want to look "foolish" for making the accusation in the first place.
It does feel like some teachers are a bit unimaginative in their method of assessment. If you have to write multiple opinion pieces, essays or portfolios every single week it becomes difficult not to reach for a chatbot. I don't agree with your last point on indoctrination, but that is something that I would like to see changed.
Schools are not about education but about privilege, filtering, indoctrination, control, etc.
Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.
"Filtering" doesn't exactly provide enough context to make sense in this argument.
Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you'd likely see how this doesn't make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people's views within it, than a non-educational background can.
"Control" is just another fearmongering word. What control, exactly? How is it being applied?
Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.
They're not tricking students, they're tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don't want to actually get the knowledge signified by the degree, then you can put "I use ChatGPT and it does just as good" on your resume, and see if employers value that the same.
Maybe if homework can be done by statistics, then it’s not worth doing.
All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I'd used a modern LLM. All the history assignment's questions could have been answered with access to Wikipedia.
But if I'd done that, I wouldn't know math, I would know no history, and I wouldn't be able to properly write any long-form content.
Even when technology exists that can replace functions the human brain can do, we don't just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.
This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.
The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.
And I'm sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don't even proofread the output.
The whole "maybe if the homework can be done by a machine then its not worth doing" thing is such a gross misunderstanding. Students need to learn how the simple things work in order to be able to learn the more complex things later on. If you want people that are capable of solving problems the machine can't do, you first have to teach them the things the machine can in fact do.
In practice, compute analytical derivatives or do mildly complicated addition by hand. We have automatic differentiation and computers for those things. But I having learned how to do those things has been absolutely critical for me to build the foundation I needed in order to be able to solve complex problems that an AI is far from being able to solve.
Maybe if homework can be done by statistics, then it's not worth doing.
Lots of homework can be done by computers in many ways. That’s not the point. Teachers don’t have students write papers to edify the teacher or to bring new insights into the world, they do it to teach students how to research, combine concepts, organize their thoughts, weed out misinformation, and generate new ideas from other concepts.
These are lessons worth learning regardless of whether ChatGPT can write a paper.
But that's fine than. That shows that you at least know enough about the topic to realise that those topics should not belong there. Otherwise you could proofread and see nothing wrong with the references
Is it? If ChatGPT wrote your paper, why would citations of the work of Frankie Hawkes raise any red flags unless you happened to see this specific tweet? You'd just see ChatGPT filled in some research by someone you hadn't heard of. Whatever, turn it in. Proofreading anything you turn in is obviously a good idea, but it's not going to reveal that you fell into a trap here.
If you went so far as to learn who Frankie Hawkes is supposed to be, you'd probably find out he's irrelevant to this course of study and doesn't have any citeable works on the subject. But then, if you were doing that work, you aren't using ChatGPT in the first place. And that goes well beyond "proofreading".
LLMs can't cite. They don't know what a citation is other than a collection of text of a specific style
You'd be lucky if the number of references equalled the number of referenced items even if you were lucky enough to get real sources out of an LLM
If the student is clever enough to remove the trap reference, the fact that the other references won't be in the University library should be enough to sink the paper
LLMs can't cite. They don't know what a citation is other than a collection of text of a specific style
LLMs can cite. It's called Retrival-Augmented Generation. Basically LLM that can do Information Retrival, which is just academic term for search engines.
You'd be lucky if the number of references equalled the number of referenced items even if you were lucky enough to get real sources out of an LLM
You can just print retrival logs into references. Well, kinda stretching definition of "just".
They can. There was that court case where the cases cited were made up by chatgpt. Upon investigation it was discovered it was all hallucinated by chatgpt and the lawyer got into deep crap
I wouldn't call "professional cheaters" to the students that carefully proofread the output. People using chatgpt and proofreading content and bibliography later are using it as a tool, like any other (Wikipedia, related papers...), so they are not cheating. This hack is intended for the real cheaters, the ones that feed chatgpt with the assignment and return whatever hallucination it gives to you without checking anything else.
Ages ago, there was a time where my dad would mail back up tapes for offsite storage because their databases were large enough that it was faster to put it through snail mail.
It should also be noted his databases were huge, (they’d be bundled into 70 pound packages and shipped certified.)
Awesome bandwidth to be sure, but I do think there is a difference between data transfer to RAM (such as network traffic) vs. traffic purely from one location to another (station wagon with tapes/747 with SD cards/etc.).
For the latter, actually using the data in any meaningful way is probably limited to read time of the media, which is likely slow.
But yeah, my go-to would be micro SD cards on a plane :)
A human would likely ask the professor who is Frankie Hawkes.. later in the post they reveal Hawkes is a dog. GPT just hallucinate something up to match the criteria.
Right, but the whitespace between instructions wasn't whitespace at all but white text on white background instructions to poison the copy-paste.
Also the people who are using chatGPT to write the whole paper are probably not double-checking the pasted prompt. Some will, sure, but this isnt supposed to find all of them its supposed to catch some with a basically-0% false positive rate.
My college workflow was to copy the prompt and then "paste without formatting" in Word and leave that copy of the prompt at the top while I worked, I would absolutely have fallen for this. :P
Something I saw from the link someone provided to the thread, that seemed like a good point to bring up, is that any student using a screen reader, like someone visually impaired, might get caught up in that as well. Or for that matter, any student that happens to highlight the instructions, sees the hidden text, and doesnt realize why they are hidden and just thinks its some kind of mistake or something. Though I guess those students might appear slightly different if this person has no relevant papers to actually cite, and they go to the professor asking about it.
The point of writing papers for school is to evaluate a person’s ability to convey information in writing.
If you’re using a tool to generate large parts of the paper, the teacher is no longer evaluating you, they’re evaluating chatGPT. That’s dishonest in the student’s part, and circumventing the whole point of the assignment.
The point of writing papers for school is to evaluate a person’s ability to convey information in writing.
Computers are a fundamental part of that process in modern times.
If you’re using a tool to generate large parts of the paper
Like spell check? Or grammar check?
... the teacher is no longer evaluating you, in an artificial context
circumventing the whole point of the assignment.
Assuming the point is how well someone conveys information, then wouldn't many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they're not using machines their whole lives?
It’s the same argument as the one used against emulators. The actual emulator may not be illegal, but they are overwhelmingly used to violate the law by the end user.