
The AI Question

“What are the positives and negatives of using ChatGPT (and other AI) in post-secondary?”

This is a question I need to answer for an essay competition, and while I do have ideas of my own and some from my professor when I asked for his opinion, I was hoping some of you here might have insights to add.

Is it ethical for me to ask for your aid? I don’t want to overstep. I would not use anyone’s names/usernames at all in this essay; at most, I will cite sources on the matter.

I think my current ideas about the pros and cons are good (more cons than pros, in my opinion), but I want to know if I missed anything.

If needed, I will add the ideas I’ve come up with so far, but for now I’ll leave that out.

Edit: I was tempted to post this in the “Ask Lemmygrad” community, but I think that’s more of an educational community about communism specifically, so I’ll stick to asking here.

12 comments
  • for me the arguments would be mostly negative because:

    1. using it does not train your research skills
    2. using it does not train your creative and academic writing skills
    3. it is often just wrong when synthesizing text

    so to me those are major cons in an educational context. some positives would perhaps be:

    1. it is useful as a phrase bank, as it can quickly give ideas on how to put words together.
    2. it is alright at giving direction when starting research, sort of like wikipedia

    that’s all i can think of so far

    • Don't forget about the privacy and copyright concerns: scraping the internet for training data, copyrighted or not, and also logging every input for the same purpose (and probably others).

      A pretty significant con in my opinion.

    • These are all great, thank you for this!

  • My preferred way of thinking about these chatbots is that they're effectively just on-demand peers with quick Google skills to chat with. Just like humans, they can be confidently wrong or have incomplete information or presentation, but also just like humans, they can help you explore your ideas and give you quick insight.

    Besides all the technical cons (blatant disregard for copyright law and it being randomly racist sometimes), I don't think they're particularly bad. You just have to keep in mind that they're about as trustworthy as your local arrogant lab intern. Usually you're already required to source your claims in higher-education work anyway.

    Main issue right now is that the current favourite implementation seems to be specifically trained to almost never admit to not knowing something.

    • Main issue right now is that the current favourite implementation seems to be specifically trained to almost never admit to not knowing something.

      Training data comes from Americans, so that makes sense.

    • Just like humans, they can be confidently wrong or have incomplete information or presentation, but also just like humans, they can help you explore your ideas and give you quick insight.

      very very good stuff, thank you!
