College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • UsernameIsTooLon@lemmy.world · 1 year ago

    You can still have AI write the paper and then copy it from screen to paper yourself. If anything, this will make AI harder to detect, because it’s now AI plus human error introduced during the transcription, rather than straight copying and pasting.

    • Zacryon@feddit.de · 1 year ago

      Noooo. That’s a genius countermeasure without any obvious drawbacks!!1! /s

  • ZytaZiouZ@lemmy.world · 1 year ago

    The best part is that there are handwriting-generating programs, and even web pages that convert text to G-code, letting you use a 3D printer to write things out. In theory it should be really hard to pass the output off as human-written, let alone match your own handwriting, but I’m sure it will only get better. I think there are even models that try to match a particular person’s writing.
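
The text-to-G-code idea above can be sketched in a few lines. This is a hypothetical toy, not one of the actual tools the commenter mentions: the two-letter `STROKES` table, `text_to_gcode`, and all coordinates are invented for illustration, and real converters use full fonts, curved strokes and deliberate jitter.

```python
# Hypothetical sketch: drive a pen mounted on a 3D printer along straight
# strokes, one character at a time. Each character is a list of pen strokes;
# a stroke is a list of (x, y) points in millimetres, relative to the
# character's left edge. Only "H" and "I" are defined here.
STROKES = {
    "H": [[(0, 0), (0, 8)], [(4, 0), (4, 8)], [(0, 4), (4, 4)]],
    "I": [[(0, 0), (0, 8)]],
}

def text_to_gcode(text, char_width=6.0, pen_up=2.0, pen_down=0.0, feed=1500):
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    for i, ch in enumerate(text.upper()):
        x0 = i * char_width  # shift each character to the right
        for stroke in STROKES.get(ch, []):
            sx, sy = stroke[0]
            lines.append(f"G0 Z{pen_up} ; lift pen")
            lines.append(f"G0 X{x0 + sx} Y{sy} ; travel to stroke start")
            lines.append(f"G1 Z{pen_down} F{feed} ; pen down")
            for x, y in stroke[1:]:
                lines.append(f"G1 X{x0 + x} Y{y} F{feed} ; draw")
    lines.append(f"G0 Z{pen_up}")
    return "\n".join(lines)

print(text_to_gcode("HI"))
```

Feeding the output to a printer with a pen holder would trace block capitals; matching a specific person’s handwriting is the part that needs the learned models the comment alludes to.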

  • Queen HawlSera@lemm.ee · 1 year ago

    Isn’t this kind of ableist? I remember when I was in school I had a special accommodation to type instead of write, because my wrists were too weak to write legibly but my fingers were fast enough to type expediently. They legitimately thought I was a really stupid kid, until they realized my spelling tests were not incorrect.

    They just couldn’t read what I had spelled correctly. Once I wrote the word “fly” and the teacher mistook my y for a v. I went from being the dumbest kid to the smartest kid as soon as the accommodation was put in place.

  • TimewornTraveler@lemm.ee · 1 year ago

    Can we just go back to calling this shit Algorithms and stop pretending it’s actually Artificial Intelligence?

    • WackyTabbacy42069@reddthat.com · 1 year ago

      It actually is artificial intelligence. What are you even arguing against man?

      Machine learning is a subset of AI, and neural networks are a subset of machine learning. Saying an LLM (based on neural networks for prediction) isn’t AI because you don’t like it is like saying rock and roll isn’t music.

      • TimewornTraveler@lemm.ee · 1 year ago

        I am arguing against this marketing campaign, that’s what. Who decides what “AI” is, and how did we come to decide what fits that title? The concept of AI has been around a long time, since the Greeks, and it has always been the concept of a man-made man. In modern times it’s been represented as a sci-fi fantasy of sentient androids. “AI” is a term with heavy associations already cooked into it. That’s why calling it “AI” is just a way to make it sound like high-tech, futuristic dreams come true. But a predictive text algorithm is hardly “intelligence”. It’s only being called that to make it sound profitable. Let’s stop calling it “AI” and start calling out their bullshit. This is just another cryptocurrency scam: a concept that could theoretically work and be useful to society, but is not being implemented in a way that lives up to its name.

      • over_clox@lemmy.world · 1 year ago

        If AI was ‘intelligent’, it wouldn’t have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy…

        https://lemmy.world/post/1963767

        An intelligent system would have stopped to say something like, “I’m sorry, that doesn’t make any sense, but here are some related topics to help you.”

        • WackyTabbacy42069@reddthat.com · 1 year ago

          AI doesn’t necessitate a machine even being capable of stringing complex English into a series of steps toward something pointless and unattainable. That in itself is remarkable, however naive it may be in believing that a foldable phone can be inflated. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly above that of humans.

          The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English.
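
The “machine taking actions in an intelligent manner” bar can be sketched as a minimal sense-decide-act rule, like the dynamic traffic light mentioned above. The `decide` function and its idle threshold are invented for illustration; a real controller would poll an induction loop or camera instead of taking a boolean.

```python
# A demand-actuated traffic light as a tiny rule-based agent:
# sense (is a car waiting?), decide (switch or hold), act (return new state).
def decide(current: str, car_waiting: bool, idle_ticks: int) -> str:
    """Switch to green when a car is waiting; fall back to red after idling."""
    if current == "red" and car_waiting:
        return "green"
    if current == "green" and not car_waiting and idle_ticks >= 3:
        return "red"
    return current  # otherwise hold the current state

print(decide("red", car_waiting=True, idle_ticks=0))  # prints "green"
```

Whether a three-line rule table deserves the word “intelligent” is exactly the disagreement in the replies below it.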

          • TimewornTraveler@lemm.ee · 1 year ago

            The only real requirement for AI is that a machine take actions in an intelligent manner.

            There’s the rub: defining “intelligent”.

            If you’re arguing that traffic lights should be called AI, then you and I might have more in common than we thought. We both believe the same thing: that ChatGPT isn’t any more “intelligent” than a traffic light. You just want to call them both intelligent, and I want to call neither of them that.

            • throwsbooks@lemmy.ca · 1 year ago

              I think you’re conflating “intelligence” with “being smart”.

              Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.

              Acting like some GPT is on the same level as a traffic light is silly, though. On a base level, yes, it “reads” a text prompt (along with any message history) and decides what to write next. But the decision it’s making is much more complex than “stop or go”.

              I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text on the web, so it has context for a bunch of things. It doesn’t care if it’s right, and it doesn’t look things up before it says something.

              But to have a computer be able to do that at all? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, with people saying for most of that time that it was impossible), and we only hit it because our computers got powerful enough to do it at scale.
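
The “sentence finisher” described above can be sketched as a toy bigram model. The `train` and `finish` functions and the sample corpus are invented for illustration; real LLMs use neural networks over subword tokens, not word-count tables.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def finish(model: dict, prompt: str, length: int = 5) -> str:
    """Greedily append the most frequent continuation of the last word."""
    out = prompt.lower().split()
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # never seen this word: nothing to predict
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran to the door")
print(finish(model, "the cat", length=3))  # prints "the cat sat on the"
```

As the comment says, it doesn’t care whether it’s right; it only continues with whatever followed most often in the text it has read.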

              • ParsnipWitch@feddit.de · 1 year ago

                Intelligence is more about taking in information and being able to make a decision based on that information.

                Where does that come from? A better gauge for intelligence is whether someone or something is able to solve a problem they did not encounter before. And arguably all current models completely suck at that.

                I also think the word “AI” is used a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along, we will have to make up a new word, because “general artificial intelligence” won’t be distinctive enough for corporations to market their next giant leap in technology…

                • throwsbooks@lemmy.ca · 1 year ago

                  I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach

                  Here’s a photo as an example of how this book approaches the topic: there’s an entire chapter on it, with sections on four approaches, and essentially even the researchers have been arguing about what intelligence is since the beginning.

                  But all of this has been under the umbrella of AI. Just because corporations have picked up on it, doesn’t invalidate the decades of work done by scientists in the name of AI.

                  My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?

                  If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?

  • MaggiWuerze@feddit.de · 1 year ago

    has led some college professors to reconsider their lesson plans for the upcoming fall semester.

    I’m sure they’ll write exams that actually require an understanding of the material rather than regurgitating the seminar PowerPoint presentations as accurately as possible…

    No? I’m shocked!

  • HexesofVexes@lemmy.world · 1 year ago

    Prof here - take a look at it from our side.

    Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into a court to certify your expertise, we could not do so beyond a reasonable doubt.

    I am not arguing that exams are perfect, mind, but I’d rather doubt a few students’ inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

    Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of those students’ ability, but they do suggest the students can obfuscate AI work well.

    • maegul (he/they)@lemmy.ml · 1 year ago

      Here’s a somewhat tangential counter, which I think some of the other replies are trying to touch on … why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

      In a world where something like AI can come up and change the landscape in a matter of a year or two … how much value is left in the idea of assessing people’s value through exams (and to be clear, I’m saying this as someone who’s done very well in exams in the past)?

      This isn’t to say that knowing things is bad or that making sure people meet standards is bad. But rather, to question whether exams are fit for purpose as a means of measuring what matters in a world where what’s relevant, valuable or even accurate can change pretty quickly compared to the timelines of one’s life or education. Not long ago we were told that we wouldn’t have calculators with us everywhere, and now we could have calculators embedded in our ears if we wanted to. Analogously, learning and examination are probably premised on the notion that we won’t be able to look things up all the time … when, as current AI among other things suggests, that won’t be true either.

      An exam assessment structure naturally leans toward memorisation and being drilled in a relatively narrow band of problem solving techniques,1 which are, IME, often crammed prior to the exam and often forgotten quite severely pretty soon afterward. So even presuming that things that students know during the exam are valuable, it is questionable whether the measurement of value provided by the exam is actually valuable. And once the value of that information is brought into question … you have to ask … what are we doing here?

      Which isn’t to say that there’s no value created in doing coursework and cramming for exams. Instead, given that a computer can now so easily augment our ability to do this assessment, you have to ask what education is for and whether it can become something better than what it is given what are supposed to be the generally lofty goals of education.

      In reality, I suspect (as many others do) that the core value of the assessment system is to simply provide a filter. It’s not so much what you’re being assessed on as much as your ability to pass the assessment that matters, in order to filter for a base level of ability for whatever professional activity the degree will lead to. Maybe there are better ways of doing this that aren’t so masked by other somewhat disingenuous goals?

      Beyond that there’s a raft of things the education system could emphasise more than exam based assessment. Long form problem solving and learning. Understanding things or concepts as deeply as possible and creatively exploring the problem space and its applications. Actually learn the actual scientific method in practice. Core and deep concepts, both in theory and application, rather than specific facts. Breadth over depth, in general. Actual civics and knowledge required to be a functioning member of the electorate.

      All of which are hard to assess, of course, which is really the main point of pushing back against your comment … maybe we’re approaching the point where the cost-benefit equation for practicable assessment is being tipped.


      1. In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.
      • Spike@feddit.de · 1 year ago

        In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.

        You are getting some flak, but IMHO you are right. The only thing an exam really tests is how well you do in exams. Of course, educators don’t want to hear that. But if you take a deep dive into the (scientific) literature on the topic, the question “What are we actually measuring here?” is rightfully raised.

        • maegul (he/they)@lemmy.ml · 1 year ago

          Getting flak on social media, through downvotes, can often (though not always!) be a good thing … means you’re touching a nerve or something.

          On this point, I don’t think I’ve got any particularly valuable or novel insights, or even any good solutions … I’m mostly looking for a decent conversation around this issue. Unfortunately, I suspect, when you get everyone to work hard on something and give them prestigious certifications for succeeding at that something, and then do this for generations, it can be pretty hard to convince people to not assign some of their self-worth to the quality/value/meaning of that something and to then dismiss it as less valuable than previously thought. Possibly a factor in this conversation, which I say with empathy.


          Any links to some literature?

          • Spike@feddit.de · 1 year ago

            I’ve only used papers in German so far, sadly.

            Here is something I found interesting in English:

            Sato, B. K., Hill, C. F. C., & Lo, S. “Testing the test: Are exams measuring understanding?” Biochemistry and Molecular Biology Education.

            In general: elicit.org. A really good site.

            • maegul (he/they)@lemmy.ml · 1 year ago

              Hadn’t heard of that elicit site … thanks! How have you found it? It makes sense that it exists already, but I hadn’t really thought about it (I haven’t looked up papers recently, but may soon).

              Also thanks for the paper!!

              • Spike@feddit.de · 1 year ago

                I found it relatively soon after it was created, and I use it to get a quick overview of papers when writing my own. It is sooo good for that.

    • Phoebe@feddit.de · 1 year ago

      Sorry, but it was never about OUR ability in the first place.

      In my country, exams are old, outdated and often way too hard. All classes are outdated and way too hard. It often feels like we are stuck in the middle of the 20th century.

      You have no chance when you have a disability. Or when you have kids, or parents to take care of. Or hell: when you have to work, because you can’t afford university otherwise.

      So I can totally understand why students feel the need to use AI to survive that torture. I don’t feel sorry for an outdated university system.

      If it is about OUR ability, then create a system that is made for students and their needs.

    • Spike@feddit.de · 1 year ago

      We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

      Could you ever, though, when giving them work to do outside your physical presence? People have had their friends, parents or ghostwriters do the work for them all along. You should know that.

      This is not an AI problem, AI “just” made it far more widespread and easier to access.