Educating AI

Crossed Wires

Episode 75 April 15, 2024 01:01:51

Hosted By

James Bilsbrough & Jae Bloom

Show Notes

The hype around AI continues to grow, often muddling genuinely useful elements like machine learning, pattern recognition, and properly and ethically trained large language models with tools like ChatGPT and other generative AI tools that have questionable data sources.

With returning guests, Prof. Catherine Flick and our very own Zack, we talk about the challenges of AI in the academic space. Zack also shares his own experience of being caught using ChatGPT to complete a final essay he, by his own admission, had left until the last minute.

Are you in academia and have been tempted to use AI shortcuts? Or maybe you're a teacher or professor who has had experience with students taking those shortcuts? We'd love to hear from you, so please send us a note at [email protected], or why not come join the discussion on our Discord server.

If you liked this episode or any of our content, we’d greatly appreciate any little bit of support you can throw our way over at our Ko-Fi page.

Affiliate Promotion

"Don't you people backup? I backup, and I don't even know what backing up means!" - Nicola Murray, Secretary of State for the Department of Social Affairs and Citizenship

If you have any kind of file that's important to you, be that a treasured family photo, your latest research paper, or just the list of co-ordinates of your best places in Minecraft, you'll want to make sure it's kept safe, right? Well, just syncing that to the cloud isn't really enough; you need a proper backup strategy too.

Part of a good backup strategy is having a backup that isn't in the same place as your computer, and this is where a good cloud backup service is so important. Our friends at Backblaze provide simple, reliable, and affordable backup options for your Mac or Windows PC for just $9/month. You can get a 15-day free trial when you follow this link to sign up.

Chapter Times

  1. 00:00:04: Introductions
  2. 00:05:11: Zack’s Story
  3. 00:14:55: Assistive Tools vs AI
  4. 00:24:38: Due Diligence & Alternative Assessments 
  5. 00:45:33: Generative AI 
  6. 00:51:45: AI Hype vs Machine Learning 
  7. 00:55:00: Wrapping Up

Credits

Intro and outro theme: Ace of Clubs by RoccoW


Episode Transcript

[00:00:05] Speaker A: Well, hello, everyone, and welcome back to Crossed Wires, the technological. I keep doing that. The technology variety show that covers all sorts of really cool things and some interesting topics as well. And this week we are back to discuss AI and, in particular, AI in education. Now, we've got a wonderful panel, but before we introduce our guests, of course, I'm joined by my beautiful co-host. Hello, Jay. [00:00:32] Speaker B: Hey, how are you doing, James? I don't know why I'm talking like William Shatner. I am here. [00:00:41] Speaker A: That was, that was two Star Trek fans on this call. Two, two major Star Trek fans. I think we, we need to ask our next guest. Returning guest. In fact, both of our guests are returning guests. But welcome back to the show, our nephew Zack. Hello. [00:00:57] Speaker C: Hello, everyone. [00:00:59] Speaker A: What, what did you think to Jay's William Shatner? [00:01:01] Speaker C: I mean, beam me up, Scotty. There's no intelligent life on this planet. Oh. [00:01:10] Speaker A: Right. Thanks, Zack. And the esteemed returning guest, Professor Catherine Flick. Hello. [00:01:20] Speaker D: Hi. Yes, I am esteemed now. I was less esteemed before and now I am more esteemed. That is good. [00:01:27] Speaker A: Now, I've got to ask you, have you changed all of. Because my friend, when he got his PhD and got his doctorate, the first thing he did before anything else was go and change all his debit cards, everything. Have you had that honor, or are you still. [00:01:42] Speaker D: No, I haven't. I haven't done that, because it's actually a real pain in the butt to do, so I could not be bothered. Also, something like a professorship is something that you can. Well, it's more easily taken away. I mean, if you change jobs, it doesn't go with you between jobs, but a doctorate stays with you forever. [00:02:04] Speaker B: I had never known that. So, yeah, you'll always be the doctor? Doctor Who?
Doctor Catherine Flick. [00:02:11] Speaker D: Yes, but yes, if I change jobs. We're really going through the sci-fi catalogue tonight, aren't we? [00:02:19] Speaker A: You've been on this show before. You know the deal. Anyway, so look, we are here to talk. I've entitled this episode Educating AI. Can AI educate? And we're about to get a bit more techy. And obviously Professor Flick is an AI ethicist. Is that the correct. [00:02:40] Speaker D: No, I'm not an AI ethicist. I'm a technology ethicist, which is a little bit more broad. I've been doing this sort of thing for a lot longer than AI has. Well, this modern era of AI, anyway, has been around. So, yeah, basically I consider myself more of a generalist. I can talk a lot about AI because it's been around for a while, and it's a lot of. It has the same sort of issue. I think it has many of the same sorts of issues as other technologies. It's just that this has got. Well, it's got the current kind of hype cycle behind it, and it's also being used in much more invasive ways that have a much bigger impact on the people who both use and are, you know, indirectly impacted by it as well. Right. So, yeah, so I think there's. I mean, definitely a lot of space for AI ethics. There are a lot of people who do. Who very much concentrate on this and are probably a lot better than I am at it. But I'm actually. My theoretical specialty now is video games. I'm a professor of ethics and games technology. But that's a little bit of a weasel around, because games technology, I think, is pretty much all technology. Anything that's cutting edge kind of goes through games at some point. That's how I kind of work it at the moment. [00:04:09] Speaker A: Awesome. I mean, that's a great thing to do. I mean, I gotta ask you one quick question. Does that mean you can legitimately play computer games at work? [00:04:19] Speaker D: Yeah, I actually do. And I've been playing, well, not very fun games, to be honest.
I've been setting up, or trying to set up. Looking at some mobile games that have got some problematic dark-pattern-type monetization things, and I've been playing some of those. So social casino games, for example, which I'm not a huge fan of. So, yes, I can legitimately play games at work. I'm actually making games at work as well. I've been playing around with Unreal recently, learning how to use it, because despite being a video games person now, I've not actually ever made a 3D game. I've done narrative, you know, text-based games, but that's about it. So, yes, I've been learning how to use Unreal, and that's kind of fun as well. So lots of creative stuff, which I haven't been able to do in my previous positions, so that's nice. [00:05:11] Speaker A: So you mentioned, Catherine, the impact that AI can have on people. And one of the reasons we got Zack here is because Zack has. And Zack's smiling. You can see the smile on his face right now. So Zack has a bit of an interesting story, and I think it ties in really well to what we're talking about and the impact of AI. So, Zack, obviously, no one on the call is going to judge you. And of course, Jay, what's our rule for our listeners when we're commenting? [00:05:39] Speaker B: Always be respectful and be kind, and also follow all the Crossed Wires community rules of engagement. [00:05:47] Speaker A: Awesome. Zack, in your own words, do you want to tell your story? I mean, we can play the sad violin for you if you want, but tell your story, because I think people will find this genuinely interesting. In particular, people who maybe went to high school in the same years that I went to high school, which was a long time ago. [00:06:08] Speaker C: So I went to high school, then finished high school during the COVID era, where my motivation to actually finish work went down and the motivation to have it finished for me went up. So I didn't like the fact that I did this.
But sometimes, if I needed to complete an assignment, or if there was, like, a point, say, where I was cramming for an essay, like I needed to cram it all in to get it done, I would throw it into a text generator and it would come out and I would submit that. And me, being not as smart a person as I am today, did not know that there were things that were AI detectors. And of course, the teacher responds back to you with, why does this text come back as 90% AI-written? Which, of course, we know those detectors are only semi-accurate. But still, I struggled when I should have been doing my own work and instead relied on the AI to do my work for me. [00:07:27] Speaker A: And I mean, you nearly. I mean, I think we have to just address, not address this, but sort of emphasize the consequences. To be clear, Zack has graduated high school. I know, because I spent some time helping and talking to Zack. Zack put the effort in and got everything redone and actually graduated. But there was a moment, wasn't there, that you actually were not going to be allowed to graduate. [00:07:54] Speaker C: So what happened is I had one required course to complete for my high school diploma. I had an essay that needed to be done, and a teacher. I keep on calling him a professor because of college stuff, but it's a teacher, it's high school level. He said, I can't take any of this work. This essay was AI. So he refused the work. He said, no, I'm sorry, I cannot have you do this. On the second to last day before they walked, I got the notification saying, no, you need to do your summer work. You need to do an online course. Now, luckily, I was able to do that. I was able to get everything I needed squared away, and I was able to finish out the year on June 20, or, no, July 7 or something like that. In July, I was able to pick up my diploma from the high school.
But still, falling into that pitfall, falling into the easy, the low-hanging fruit of the procrastinating kid, did not help and is not a fun experience. [00:09:02] Speaker B: And I just want to add, for anybody that thinks this isn't common: I was actually flying to England in August, and the person next to me was asking for access to the Wi-Fi so they could use ChatGPT for their essay. And I wanted to scream at them, like, no, what are you doing? But it is a lot more commonplace than we might realize, and it's such an easy temptation. [00:09:35] Speaker A: And that brings us into kind of this rise of AI. Now, Catherine, from your point of view, is that something that you've seen? Obviously you're at more of a university level. Have you seen this happening with graduate and postgrad? [00:09:54] Speaker D: Yes, actually, with both, which is a little. Especially with postgrad, it's a little concerning. I mean, you know, we kind of. Well, we don't expect, but we aren't surprised when undergraduate students sort of try to, you know, get around things. Or if they're like, well, let's. Let me just back up a little bit, because it's not so much usually about the students, you know, being bad or whatever. Right. And I want to really clarify that. You know, university is quite stressful. There's a lot of things that you need to balance. A lot of people are coming into university, especially at the universities I've been working at, as the first person in their family to go to university at all. So they have no expectations about. They have no real understanding about what the work entails. For example, they may have caring responsibilities, they may have mental health issues, they may have disabilities, or they may have extra requirements that they need in order to kind of deal with the work that they haven't sorted out yet.
For example, we have a lot of people with undiagnosed autism spectrum conditions who come to university, and it's not really until then that they discover that that might be an issue for them. So I don't want to say that it's because students are lazy. I don't want to say it's because students are bad students. There's usually a reason behind it. Right. And I mean, you know, sometimes it is just pure procrastination. But maybe, you know, in many cases, certainly the cases that I've looked at, a lot of the students who panic at the last minute are panicking because they haven't been able to. They've had other responsibilities that have meant they haven't been able to get things done in time, and they've left it to the last minute because they don't have that understanding about how university, you know, kind of works, and that you have to kind of be consistent across a long period of time, and trying to crank out a 5,000-word essay in a day just doesn't happen. Right. And so, yes, so, you know, we. It's not that we expect it to happen, but we're not surprised when it does happen. Right? And it used to be the essay mills. It used to be, well, plagiarism, straight-up plagiarism, copying and pasting out of. Out of other papers or Wikipedia or all sorts of places, you know, blogs and things like that. Right? That's how it used to be done. But these days now, you know, students are using large language models. Right? And, yeah, so we do see it, and I've seen it at both undergraduate and postgraduate level. I've seen it for different kinds of assessments, mostly essay-type or report-type assessments. And so far, I mean, these. The fact that I've seen it means that I've caught it. And I'm sure there are probably instances that I haven't caught as well, because, you know, obviously the models are getting better. They're getting more kind of. They're shedding their kind of ChatGPT feel, if that makes sense.
Like, ChatGPT has a very specific cadence and a rhythm to how it writes. It's changing every. Well, not every day, but it's rapidly changing. Right. The style, the complexity with which these things can write. So, yeah, I mean, there are lots of tells that you can use to kind of pick up on things, like, you know, inadequate references, made-up references, no references, and these things. But the thing is that usually, certainly within the UK, you can't just say, oh, I think it's AI. You have to kind of prove it. And the burden of proof is very, very firmly on the lecturer or on the tutor to kind of prove that the student has cheated. And so it is very, very difficult to kind of catch people out unless they essentially confess to having done it. Right. But, yeah, I mean, there are other ways that we kind of deal with it if we strongly suspect that there's been, you know. I mean, the thing is that these models write really badly for the most part. They write very poor essays. They're mediocre at best, essentially. And so, yeah, I mean, there are things that, as assessors, we can do to kind of pick up on that, in terms of just the fact that it's just bad writing. But what we're trying to do now is to kind of move away from the classical written assessments to take that into consideration. Right. So that we need to make sure that the student is producing the work, because ultimately it's in the student's best interest for them to do that. Right. I mean, there's no point in getting a degree if you come out the other end and you go into work and you've got no idea what you're doing. So it's in your best interest as a student to actually do the work. So, yeah, I mean, I guess that's kind of the intro perspective that I bring to this. Right. [00:14:55] Speaker A: And there's a question I want to quickly ask, because when I was thinking about this, we've used tools like Grammarly, for example.
Grammarly, I know they're doing AI stuff, but that is more your traditional spell check. Is that fair? And do you expect to see people using Grammarly more? [00:15:17] Speaker D: Right. So, I mean, my previous institution, I'm not sure about my current one, but my previous institution actually had an institutional account for Grammarly, and so they actually actively encouraged students to use it. So, I mean, I think this is where the complexity of these models, and these implementations of these models, kind of comes into play, right? So you have kind of the, you know, the assistant, and that still requires you to actually do the work, but the model is assisting you to do the work. Right. So, I mean, there are all sorts of, quote, "AI" here. Like, I really want to be careful how I say this. I'm talking about large language models. I don't like the general word AI, because I think it buys into the hype that these companies are all kind of, you know, making lots of money off. But machine learning has been around for a long time. There are lots and lots of different models that do different sorts of things, and some of them are very specifically focused. So, I mean, things like natural language processing have been around for a really long time now, and that's the precursor to your modern large language models. And then there are very specific models that do, like, one job really, really well. So, for example, predictive models, like, I mean, things like spell checks, right? And things like predictive text are actually, like, quite old-school machine learning models, and some of them don't actually use machine learning at all. But anyway, we're sort of back in the weeds of the history of this stuff, right?
But the point is that we've been living with these sorts of assistants for a really long time. And so things like predictive text you wouldn't really consider to be AI. Right? But it is. I mean, it's the same sort of mechanism that these models use. It just predicts. It predicts the best next word, right? And so that's why, when I see the people sort of hyping up what, quote, in their words, AI can do, right. It can replace people. It can do the boring narrative work. It can, like, I don't know, whatever it is that they're saying it can do. I mean, yes, it can, but it doesn't do it very well. It just does it to a very mediocre kind of standard, right? So I think things like Grammarly and those sorts of spell-checking-type, you know, grammar and spell checkers, they're actually really, really helpful. They've been really good for students. I've taught a lot of students for whom English is their second language. I've taught a lot of dyslexic students, and they really love these sorts of tools, because it just helps them get on with it. Actually, the whole point of university, and certainly learning, in my opinion, is not about whether you write a perfect sentence. It's about whether you can convey your ideas, your creativity, your reasoning, your argument. Can you do that? You don't have to do that in a grammatically perfect way. But there's also a certain standard of academic writing that is expected. And so this helps people who struggle with that to kind of meet those standards. But it also is a teaching tool, right? So they learn what it looks like, and over time they probably need it less and less, right, because they're getting used to the style, they're getting used to the writing. But unless they're doing the practice itself. If they're leaving it all to a large language model to just churn out a thousand-word essay for them, they're not actually learning anything from that process.
And so there's a difference, I think, between those types of tools. [00:19:05] Speaker B: Yeah, because I think that AI, as you said, gets mislabeled on everything. Because I remember, like, James and I were talking about a product and it said AI. He's like, we won't use this. I'm like, it's not like the NFT thing, where if anything has an NFT, then you know what they're going after. This is more of an everything's-lumped-into-one. And I think part of it's like, what's the tool doing? Where is it getting its information from? And stuff like that. Whereas taking students' information. Because I know one of the big examples I've seen, I think The Verge and different ones have talked about, is from The Great Gatsby, where they're looking out onto the ocean and they see the green light, and the AI talked about what it means and all that. They said the issue with that is the student should have been the one to talk about what that means, not the machine. Now, you had mentioned on the last episode a thing like the peanut butter example, the peanut butter VCR. How are the large language models doing on understanding these concepts? Are they very clumsy, or are they mimicking what you were looking for? [00:20:33] Speaker D: Can you refresh my memory about the peanut butter VCR thing? [00:20:36] Speaker B: Because I say a lot of stuff. [00:20:38] Speaker D: And I don't always remember. [00:20:40] Speaker B: So you had mentioned someone asking about how to get peanut butter out of a VCR in the style of the King James Bible, but it didn't understand the actual concept of how a VCR worked. And I know, like, a lot of academics ask, can you get the concept itself? Are they mimicking the concepts you're looking for, or are they still not actually grasping what it is, and just parroting back what they think you're looking for? [00:21:07] Speaker D: Oh, yeah, no, they've got no idea. They don't understand anything. There's no understanding.
That's the thing. It's just, it's predictive text. It doesn't understand what you want to actually say. What it does is it takes the previous word, and probably the previous, you know, however many words, and whatever prompt that you gave it, and then it's like, okay, well, this is the next best word, right? And that's all it does. It's a probability model. It's a statistical model that just says the probability of this being the best next word is, I don't know, 89% or whatever, and that's good enough, so we'll go with it. Right. That's basically what it does. So there's no understanding. Like, you can't just say. I mean, this is why. So recently, one of the big AI companies came out with the prompt that they use to kind of set their chatbot up, like, for everybody, right? And it was like, you know. What's the name of the company that does Claude? It's Anthropic, I think. Yeah. Anyway, they do the Claude one, and it says, Claude is a. I don't know, but it anthropomorphizes this bot, right, this chatbot, and it says that Claude always tries to help, but blah, blah, blah, Claude is helpful, Claude is nice. All this sort of stuff about how Claude is, in order to kind of try to sway the weightings behind the probability model that's behind it. But even then, I found it fascinating, because it's kind of like going up to a computer, like a, I don't know. Um, let me just think. Like a search engine or something like that. It's like going to a search engine and saying, search engine, you are nice and kind. You are, you know, always helpful. You never, you know, return any horrible stuff to me. I mean, it's ridiculous, right? You just, you don't do that.
But, like, I mean, this thing had kind of a two-pronged approach, in that that was part of the sales pitch, I guess, in some ways. Like, they were releasing this as, like, you know, part of their kind of, oh, here's how it all works behind the scenes. Right. But it was like the highest-level, most anthropomorphizing part of it. And there was this thing about how even the people who are developing it are treating it like it's going to be able to understand context, and understand the world that it's in, but it can't. And it's really frustrating to see that come out from a company, you know? I mean, obviously they want to make a lot of money, and this is how they do it, so they're doing it. Right. But what they really should be doing is being much more transparent about the fact that, well, actually, no, they don't understand concepts, you know, context. They have no idea. Like, all the stuff that you tell them, all it does is it changes some numbers on some weightings. Right? And I mean, when I say weightings: basically, when you have probabilistic models, you can add weights to things and say you want it to look more like X, you know, versus Y. Right. And that's all it's doing, is it's fiddling with those numbers. Right. And that doesn't mean it understands any context. That's really infuriating. [00:24:25] Speaker A: It's so fascinating. I want to ask you a question, Zack, because obviously, was it just straight ChatGPT that you used, or was it a specific. [00:24:33] Speaker C: It was. Yes. It was just ChatGPT. Yeah, it was. [00:24:38] Speaker D: Can I ask a question, Zack? Sorry. I want to ask Zack a question as well, because, like, I think this is fascinating, because I don't often get to talk to willing students who have actually done this.
Usually it's in a much more adversarial situation. Right. [00:24:53] Speaker C: Much more like, you're at the desk and you're like, why did you do this? Why are we doing this? Why are we having this conversation? I thought we were going to have such a good conversation beforehand, but anyway. [00:25:05] Speaker B: No, I'm just like, I don't need this today. [00:25:07] Speaker D: Yeah, exactly. Do you know how much paperwork this is? Like, it's a lot of paperwork anyway. But, yeah, I'm just curious. Like, when you were, you know, obviously, when you. How much checking did you do before you submitted it? Did you even. Did you look through it? And what did you think when you looked through it? [00:25:28] Speaker C: So my mind was on. I had an email from the. From the teacher saying, all this work is due by noon today. And I looked and I had an hour and a half to do it, and my mind just went into panic. Panic. Uh-oh. I'm not going to pass this course if I don't complete this. So I looked around, I said, no one's going to see me here if I'm in the corner of this library doing absolutely nothing and putting this into a language model and having it generate stuff for me. I did not do my due diligence to actually even see if this was correct. I just put it in, copied it, pasted it verbatim, and sent it off. Of course, bummed. [00:26:14] Speaker D: I just wanted to know also, with the. With the prompt that you gave it, did you just put in the straight-up prompt, like the kind of the assessment outline type thing? [00:26:24] Speaker C: I did. So if I'm remembering right, it was discussing and cross-referencing what the Supreme Court would talk about for different court cases, like what their discerning opinions would be, and dissenting opinions, and it got multiple different dates wrong. I did not do any checking at all.
And obviously it didn't use stuff like Oyez, I think, which is the United States Supreme Court case lookup. I'm probably butchering that name, but it's a Greek name. Very good, though, for looking up Supreme Court cases. [00:27:04] Speaker D: That's really interesting, because this is often what. Right. So what we're finding, starting to find now, is that if you want to do your due diligence, it actually is almost as much work to go back through the essays that they generate and actually try to fix them up as it is to just, you know, write it from scratch in the first place, because you have to, you know, essentially fact-check it. Right. And fact-checking is actually, in some ways, a lot more annoying than just writing it straight out and using what you've already got and what you already know about to actually write an essay. Right. So I think, you know, the only thing that you really gain from it is if, like you did, you just straight-up copy and paste and forget about it and just send it off. Right, right. And just, you know, hope for the best. Right. But of course, you know, any self-respecting academic or teacher is going to pick up on, you know, completely wrong dates and things like that. So even if you were trying to kind of get it past the academic, you still actually have to do quite a bit of work to do that. So, you know, in some ways, you don't gain much. [00:28:16] Speaker C: So I think that there. We've talked a lot, I know, about the issues of essays. [00:28:23] Speaker B: Right. [00:28:23] Speaker C: And language models generating essays. Generating essays is scary. I think also the other flip side of the coin is the dreaded multiple-choice problem that we all love and hate can also be.
That is almost the more scary side of ChatGPT, because you can put it in and it can easily just give you the answer, and the answer it gives is way more accurate most of the time, which is. Yeah, it's just scary. [00:28:56] Speaker B: It's interesting you mention the Supreme Court cases, because I've been hearing stories so many times. Like, I've heard of lawyers who have tried to use this to look at a lot of cases, and it would get cases just completely wrong and just make up information. And, like, we have these unchecked systems, these black boxes that we don't know what they're pulling from. And sometimes they just make up stuff so they can seem proficient. So I'm thinking of the Turing test. You know, like, if it can say something that can fool somebody into believing that it's not a bot. Have you seen any cases where an essay has actually fooled an academic? [00:29:43] Speaker D: Oh, yeah, yeah. No, they can definitely fool professors. I mean, I think. But I don't think they can do it by themselves. I think it requires a human still to go in there and fix things up. I mean, they're getting better, so there may be a point at which they don't need to do that. But even then, I mean, this is also a bit of a. It is increasingly clear to academics that the methods of assessment need to change. Right. What you'll get often now is you might still get the classic essay, but then the student will have to do a talk about it, or they'll have to do a video, or they'll have to do some other sort of in-person work, or they'll have to show regular drafts every week or something like that. So there's a whole load of other work that needs to go into it to show that the student is actually doing the work themselves, or at least is, you know, engaged in the process of doing the work. And so it's really about showing that production process.
So, yeah, like I said. Or it's about, you know. Because often what happens is, in Zack's case, for example, I bet if you'd been asked to give a five-minute talk about what you'd written, you'd probably not have had much of an idea at all. Right. And you might read it through once and then just kind of have a panic about it. Right. And this is the sort of thing that. Well, caught, I guess, is probably the term to use. Right. But this is one of the ways that I identified one student that I had a couple of years back. So it was. It wasn't great. It was actually. Yeah. Anyway, but this is the way that you catch that style of cheating. Right. It's the same way that we used to catch essay mills as well. So if someone buys an essay, they usually don't read it before they submit it. And so if you get them to have to get up to give a talk about it and you ask them questions about it, they usually have not. Like, they may have got a basic-level talk because they've done some slides or whatever, but if you start to dig into the details, they panic. They absolutely panic and they don't know what you're talking about. And, yeah, that's basically one of the key things to do. And, I mean, this is a little bit, you know, problematic sometimes, because not everyone responds well to, like, just talking generally. And sometimes, you know, panic is not necessarily a sign of cheating. It can just be a sign of, you know, not being a natural public speaker. You know, pressure. Obviously, speaking under pressure is really hard. And, you know, trying to kind of defend your work is quite a difficult thing to do if you're not used to doing that. So, you know, it's not the panic alone that we look for. It's the ability, you know, the lack of ability to actually answer the questions. Right.
And so those are some of the methods that we're using now to kind of, well, at least patch over the current situation. Like, where are we going with this, right? Because at the moment, we're kind of in a transition phase where a lot of academics are starting to phase out assessments that can be, you know, gamed by large language models and moving to other sorts of assessment. But that takes time, unfortunately. I mean, you've only got so many hours in the day. And I think this is also another thing: academics are very busy and teachers are very busy, and you only usually get a certain amount of time to mark each paper, right? And so sometimes you're looking for particular key aspects of a paper, and if it happens to hit those key aspects, you just go, oh, yeah, that's, you know, good enough. Look at a bit of the content and the argument and whatever, and you give it a mark and you move on, right? Because you're only given a short period of time to give feedback. And especially if you're marking hundreds of essays, which takes forever, your brain just turns to mush, let me just tell you that. So, I mean, the more convincing these become, the more likely they are to kind of go under the radar, so to speak, because academics just don't get the sort of time to really interrogate these sorts of papers. Like, we don't check every single reference. We don't check every single date. We might pick a few at random, or if something looks a bit fishy, we'll suss that out, you know. But it sometimes comes down to experience as well. So more experienced academics kind of know what to look for, and, you know, there are ways that these things can still get through this process, right, without being picked up. And I think that's just going to become easier.
So that's why, you know, it's incumbent upon us to actually change how we assess things. And that actually might be for the better anyway, because, you know, essay writing's not always the best way to assess whether someone has actually taken in and internalized information and been able to practically apply that in some way. Right. There are other ways that you can do that, and I think it's a bit of a kick up the butt, so to speak, for academics to do this, because, I mean, I've been really into kind of alternative assessment for a while. I mean, I still have classic essays in some things, just because students know what to do with that. Right. Whereas if you have something that's weird, you have to spend like three lessons just trying to explain what you'd want them to actually do. And then you get all these emails all the time saying, are you sure you. That takes a lot of time, too. So there's a lot of reasons why we're in the situation we're in, but it's complicated. Right. But yeah, things are changing. [00:35:30] Speaker C: So I've had, of course, being the student during these times, I've had varied experiences, as you can probably imagine, where, you know, sometimes essays aren't the best to do. But I think some of my favorite assessments that have come up recently are new ideas like group discussions, where you have to provide meaningful discussion for about a minute or two just on the topic that you researched, or a public presentation. Say, for example, you're taking public speaking and you have to talk about a thing. Sure, you still have to type out an essay, you still have to hand it to the professor, but you have to give a speech on it, and a meaningful speech at that. Of course, I know that it's. Finding AI is going to be horrible. [00:36:24] Speaker D: It's going to be. [00:36:25] Speaker C: That's right.
I should rephrase that. Finding where pupils or students use AI is going to be very hard, especially in the multiple choice sector. Where you're generating text, it can be easier. It can be more easily spotted in text generation, but it still exists, as we know and as I know, because of that personal experience. But I am a fan of the alternate assessment types that you're talking about. [00:37:06] Speaker D: Yeah. I think the multiple choice question thing is actually a really good point too, because, I mean, those are classically the very lazy. Like, not lazy. I mean, I used to do those when I was at university. Right. And sometimes they work well for certain sorts of things. But, you know, they're often sort of used as a kind of default. And certainly in the UK, and certainly in the post-92 universities in the UK, they've moved a lot away from exams where possible. And that's one of the things that we're doing more of: more kind of coursework, more in-class work, more, you know, work where you can show that you're actually producing the work as you go, versus having to cram everything in your head and study for an exam and then, you know, bombing that, because sometimes life just sucks, right? The problem then is that if you don't have this kind of balance of exams and coursework, you've got like four modules all wanting you to do coursework at once. They're all due at the same time at the end of the module, and so you end up with a slightly different set of problems, where you have students who are completely inundated with work for the last two or three weeks of term, and then they bomb out and they use, you know, cheats of whatever kind, right, or they just don't hand stuff in, or they drop out or whatever. And all of those things suck.
So there's been, certainly at the universities I've been at, a lot of conscious effort put into how you best pace out things like coursework, how you best pace out due dates and those sorts of things as well. Because it's very easy to say, oh yes, there's an exam week and there's your exam and boom, I don't have to think about it until then. But then there are ways of doing exams too, where, I mean, online exams, obviously, are the most vulnerable to these sorts of things. But the classic kind of sit-in-a-hall-for-3-hours-and-do-a-written-exam, that old-fashioned way, you know, you can't really have ChatGPT quite yet in your brain, right? So thank goodness. But, yeah, I mean, it may come to that. But then, you know, we evolve as humans, too, right? So technologies become part of our everyday life, and we're increasingly outsourcing our brains to our phones and computers anyway. You know, at what point do. Like, I mean, I have no trust in Elon Musk's attempt at chips in brains, but, you know, there is interest there, right? And so there may well be those sorts of things coming down the line, right? And, I mean, we don't really know what that's going to look like until we get there. But we have to kind of think about it, because there's a lot at stake when you start to think about new technologies and how we integrate them into our lives in such a kind of intimate fashion like that, right? [00:40:11] Speaker B: And I definitely know that, like, for online, I've had to do things like ProctorU and things like that, where, when I'm doing the test, somebody is watching me and watching exactly what's on my computer. That way they can make sure I'm not looking at things or looking at test help and stuff like that.
But I've done some of the open book ones where you could very easily look up an answer to a test. You don't, because you have the integrity, but it's so easy that you could do that, even pre-ChatGPT. [00:40:48] Speaker D: I mean, open book exams are supposed to be about how you process the information and whether you understand it. Like, usually they're written in such a way that the answers are not ones you just get. They're not factual answers straight out of a book, right? You have to apply things to a different situation. So, you know, take an algorithm, or if you're looking at chemistry or physics or something like that, it's about solving a problem that is similar to other problems that might be in the book or that you've solved before in class or whatever, and it's about new applications. But then if you're in the humanities like I am, you can't ask those same sorts of questions, right? But then those are the sorts of places where things like multiple choice are less likely to be used anyway. So you can, you know, mess around with other sorts of assessments for that sort of thing. So it's about picking the right sort of questions, the right sorts of assessment, and it's rightly shaken up the academic world a bit, I think. So, yeah. [00:41:55] Speaker B: Would you say part of this is the, like, impersonality of a lot of teaching? Because, like, you don't really get to know your professor anymore. You don't really get to know them because, like, you even described the pressure of having so many students coming through your course. You don't get a chance to really sit with them. So I wondered how it would change if, like, a student got to know their professor more. Would they feel less desire to cheat? Like, Zach, if.
If you got to know your professors even better or had a personal connection, you wouldn't want to let them down. I'm wondering if the impersonality of a lot of academia nowadays is leading to this increase in cheating. [00:42:44] Speaker D: So we have different size classes. There are some that are really big and there are some that are really small, and we get cheaters in all of them. So I wouldn't say there's any difference, really. [00:42:55] Speaker B: Gotcha. [00:42:57] Speaker C: So for me, because, at a community college level, it's not going to be as many lecture-type classes. It's more traditional. You know, you sit down and you see the professor right there. [00:43:10] Speaker D: We. [00:43:11] Speaker C: He still gives a lecture, or she still gives a lecture, but it's not the expansiveness of a lecture hall. Like, I joke with the professor sometimes. This semester, I have three-person classes. I joke with the professor. He or she knows pretty much all of us in the class. We have a good time. So if I'm in an environment like that, a traditional classroom, no, I'm most likely not going to cheat, only because I know the professor. Like, I'll potentially ask, hey, what are your plans for the weekend? Are you going to be doing blah? Because they'll tell you stuff like that. But if it's a larger classroom or a larger lecture hall where, you know, it's not as intimate, you might say, oh, the professor isn't going to look over my shoulder and see exactly what I'm doing right now, because they don't know you. They can't sit down with every single one of their students to get to know them at the level of a small classroom. So in a larger classroom, you're more likely to cheat, as you don't know the professor well. [00:44:35] Speaker D: Yeah, I mean, I think it's also just a head count, right.
And, you know, per capita, you're going to get more people cheating in a big classroom than in a small one, I mean, just in terms of the numbers. Right. So, I mean, the problem is the pressure is the same, right. The reason people use these is not because they want to get through university without writing an essay. It's because they have other things going on that mean they don't have the time to put that effort in. [00:45:13] Speaker A: I mean, certainly here in the UK, I think, and I don't know if this is the same in the US, but certainly the cost of living crisis means that student loans don't cover your living expenses, and you have to go and get a job, and that's likely going to be many, many more hours on top of your university work. So, Catherine, one quick question, because you talked about project work like videos and stuff. Is there a concern, and it can be a very simple yes or no, with tools. There are some wonderful AI generative content tools which can take you as a person, or Descript, which we use as a platform here, has the ability to use AI voices. Is there a concern that that could start creeping in? [00:45:56] Speaker D: Well, obviously, I mean, yeah. There's a lot of issues obviously with generative AI and, you know, things like the data that goes into them, the biases that they produce and things like that. And I think that students who use those need to be aware of that, and also more simple things like, you know, the amount of energy and water that they use. Right. Just to actually train these things and also to run the queries on them, it's not insignificant. So, I mean, the more complex these get, the more that's going to go up.
But in terms of the actual. If you ignore all of that, I mean, there's obviously also issues to do with copyright, there's issues to do with potential, you know, personal data reuse and things like that. I mean, there's a lot of issues with generative AI for project work. On some of the positive sides, if you really want to get to the nitty gritty of the positive sides, I think it can be useful as, like, a muse, you know, as a kind of Pinterest mood-board type thing. But I personally am more concerned about the kind of more frivolous nature of those types of uses of it as well, because, I mean, it is so energy inefficient, right. It is very energy hungry. And then there's also the fact that it does reproduce these biases, which means that you're going to be, in some ways, kind of more limited in what you produce, right. If you're a creative person and you limit yourself to what an AI is producing by prompt writing or whatever, and, you know, if you feel that's an art form in itself, which I guess it is in some ways, right, you're going to be limited by those models and the data that's put into them and how they've been trained and all that sort of thing. Right? And also, you're kind of coasting on the backs of a lot of people who've done the hard yards, right? The graft of writing, the graft of the creativity, the graft of drawing and painting and doing all the things that have gone into the training of that model. And so I think that there's a legitimate claim for innovation there, right, in terms of your own creativity. But I think you have to recognize that that's where it's come from and that you need to acknowledge that as well. Right?
And I think, yeah, this is where these tools cross over from being assistants, like, you know, your editors and your Photoshop touch-up or whatever, right, into the productive side of things. This is where it crosses over, and this is where, for me, certainly, there's that line drawn, right. The production of new, rather than it being a tool, I think, is the key thing there. [00:49:02] Speaker A: I'll give you a really good example. I mean, obviously I'm not a student. I do a little bit of studying for work, like boring HR courses. But a really good example of where an AI. Pixelmator Pro, they call it ML because it's using machine learning, has a super resolution enhancement. When I was working on the photos for my late grandparents' orders of service, particularly my nana's, I had loads of scanned images, like really old photos, some slides, and I ran them through Pixelmator Pro's machine learning enhance, and it made those usable print photos. It was able to do resolution scaling. That, I think, is absolutely fine. But if I was to say, oh, generate a photo of my nana and granddad standing on a beach they've never been to. Well, that's not. That's generative. And I think you make a really good point. You talked about copyright. Unfortunately, the harsh reality is that a lot of these models are not trained with, or are not getting, consent from the people whose images or works they are bringing together. Quickly, before we move on to the last point, any thoughts from Zach or Jay on that? [00:50:07] Speaker B: Yes, I think what would be cool, say, for instance, a professor could say, hey, I want you to put a template for the essay. I want this, this, this. That way you could put it into a word processor, the processor could put together the template, and you still wrote everything, but it was laid out how the professor wanted. That could be useful. [00:50:29] Speaker D: Yeah, but the professor should have that template.
If there's a template, that should be part of the assessment. [00:50:35] Speaker B: I know, but a lot of professors I've known don't have it. Or at least the courses I went through. [00:50:41] Speaker D: Yeah, because you cross very quickly into it giving you the ideas for what to write about. That's the problem. [00:50:46] Speaker B: That is true. Yeah. [00:50:48] Speaker C: So the problem, I would say, is that once you start getting the generative essay stuff, you start saying, hmm, maybe this is nice, and soon you might start to lean on it more, as it is good. And don't get me wrong, some of this assistive stuff is amazing. But it's when we lean on it too much that it starts to break. The crutch goes from something you maybe need to something you're leaning on full time, when the crutch is just supposed to be there to help you. It's not supposed to be a full-time leg that you're standing on. And that's where, like, the generative models, like ChatGPT or Bing, whatever. I think it's Gemini. Or is that. [00:51:45] Speaker A: My final question. And this is mostly aimed at Professor Flick. In your experience, again, you mentioned you don't like the term AI, and I'm with you, because machine learning, so being able to recognize patterns and extract information from a group of data, is incredibly valuable. And we're seeing, and again, I'm trying to choose my words carefully, we've seen machine learning being used so much more in medical diagnosis, to be able to assess images like MRI scans, et cetera. Do you think there needs to be a greater awareness of decoupling genuinely helpful machine learning in research, in academia and medicine, from the buzzword of AI? [00:52:33] Speaker D: I think all the AI researchers that I know would love that.
Every time I've spoken to some of my colleagues who do machine learning recently, their eyes are so rolled all of the time that, you know, they're popping out. They're just constantly frustrated with the hype around these large language models, because, I mean, it's frustrating because they're not being marketed in a factual way. They're being hyped in a way that is not possible for them, and people are getting the impression that they can do what they can't do, which is, honestly, I mean, it's false advertising. I'm just waiting for the first false advertising litigation to come through. Right. Because, I mean, it'll happen. Right. And so I think, yeah, what I think needs to happen is. I mean, the AI term, I suspect, is unfortunately here to stay. But I'm hoping, a little bit like with the crypto hype, that it becomes a bit of a poisoned chalice, so to speak, and people stop talking about it when it ultimately doesn't deliver what it claims to be able to. I'm hoping that we'll go back to actually describing things how they're supposed to be described, like machine learning and natural language processing and large language models and all of the adversarial networks and all of this stuff. These are the underlying algorithms that are used, and there are many, many, many of them. And they've all been kind of put under this big umbrella of AI, because often you use multiple of these in conjunction with each other, right, to produce the best model. But, I mean, people put a lot of magic into the word algorithm. I mean, an algorithm is just a bit of a computer program that tries to predict stuff, right?
Well, in the uses that we're. I mean, there's lots of uses for the term algorithm, but in the kind of AI setting, it's generally just a set of statistical methods that predict the next best output. A prediction model, and that's all it is. And if anyone says otherwise, they're lying to you or they want to sell you something. [00:54:58] Speaker A: Well, Catherine, I'm very much aware that you have a teething young one to go and look after. So what we will do is we will get you to plug any links that you want to. I don't know if. I mean, you even mentioned. I don't know if you want to promote EMF, with it coming up. [00:55:17] Speaker D: I mean, so. I do a lot of things, I guess. Yeah, EMF is coming up. Unfortunately, you can't get any more tickets to that, but you can still submit a talk if you want to. The call for papers is up at emfcamp.org. You can find the call for participation there. So if you're in the UK, or outside and you can get to the UK, and you want to give a talk and/or a workshop and/or a performance or music or whatever, bring a cool thing. Yeah, check that out. Otherwise, I guess my website is liedra.net. You can find me on Mastodon at @catherineflick@mastodon.me.uk, where I'm also a moderator, which is good fun. And I guess, yeah, what else have I been doing recently? I'm doing a lot of video games papers and things, so you can find all of those on my website. And, yeah, it's been really fun talking about this with you. Also, thank you so much for inviting me. [00:56:15] Speaker A: No, our pleasure. And from me and Jay, who are both mastodon.me.uk users, thank you to you and the entire mod team. It is such a nice experience. You guys do a great job. [00:56:25] Speaker B: I know. I feel so safe as a trans person, so I really appreciate it. [00:56:29] Speaker D: That's really kind of you to say. Thank you. I just want to say one thing to Zachary. You know, study hard, you know, do.
Do your own work and you'll not regret it. [00:56:41] Speaker C: I've been enjoying it so far. [00:56:43] Speaker A: His mum's gonna. Thank you so much for that, Catherine. Thank you very much, Catherine, for joining us. Zach, thank you for, I mean, genuinely, your openness and your honesty. We really, really appreciate it. Zach, you don't really. I mean, you're on some socials. What if people want to get in touch? I mean, obviously you're in our Discord. You're one of our mods. Is there anywhere else you want to point people to, or. [00:57:13] Speaker C: So I am still on, from last time, I'm still on the sinking ship that we call Twitter, or X. You can find me there with the same tag as my Discord. It's going to be Kylo12343. Don't ask me why. It was an 11th grade name. Or no, it was a 6th grade username. Other than that, I mean, I have an Instagram, but I have a lot of connections in the Discord. So if anyone's interested, honestly, you can check me out and look at the different connections I have in my profile, because it will point you to a lot of good places to reach me. I'm mostly using Discord. [00:58:03] Speaker A: Thank you, Zach. And again, thank you for being here. Great second appearance on the show, and hopefully, I think me and you have got a show sort of in the works, in the pre-planning stage, a potential sort of hybrid podcast live show with Jay as well. So we will see how that comes out. Jay, this has been great. I mean, I know you are going back to university soon, but yeah. Any final thoughts from you while we're just rounding out? And of course. [00:58:34] Speaker B: Yeah, well, I mean, my final thought is, Zach, I won't talk about your ex. And also. [00:58:41] Speaker A: What? Oh, Twitter. James, that's terrible. [00:58:47] Speaker C: I knew it was going to come up.
[00:58:48] Speaker B: And also, I definitely want to follow and watch the educational space. But to everybody out there in school, please make sure that you're actually doing the work. Don't let the robot overlords take over all of your education. [00:59:05] Speaker A: No, I think be wise with the tools that you're using, and always ask the questions: what data is this taking? What data is being input, where is the data coming from, and what of mine is being uploaded? So, yeah, absolutely. Well, listen, folks, until next time, come and join us in the Discord for discussion here. Professor Flick is in our Discord as well, and obviously follow the same ones. Oh, Jay, I haven't updated your outro, but you got us on Bluesky, so. [00:59:35] Speaker B: Yep. You can find us at crossed. [00:59:41] Speaker A: Awesome. Thank you. [00:59:45] Speaker B: Do you want me to mention the other place? [00:59:48] Speaker A: I really don't, but you're going to. [00:59:50] Speaker B: So go to YouTube, which you can find at crossedwires.net/youtube. You can also find us on TikTok, crossedwires.net/tiktok. [01:00:00] Speaker A: And I think, did we link our PeerTube as well, for those who want to use the fediverse? We did. [01:00:04] Speaker B: By the time of the recording, it will be up. So crossedwires.net/peertube. Awesome. [01:00:09] Speaker A: And we're with TILvids, which is awesome. All right, thank you, everyone. Thank you both. I will stop the recording and we'll see you next time. Thanks for listening to this episode of Crossed Wires. We hope you've enjoyed our discussion, and we'd love to hear your thoughts, so please drop us a note over to podcast@crossedwires.net. Why not come and join our Discord community at crossedwires.net/discord. We've got lots of text channels, we've even got voice channels, and we've got forum posts for every episode that we put out there.
If you are on Mastodon, you can also follow us, either by heading over to crossedwires.social or just following Crossed Wires there. [01:00:49] Speaker B: If you'd like to check out more of our content, head on over to crossedwires.net/youtube for all our videos, and keep an eye on our Twitch channel at crossedwires.net/live for our upcoming streams. [01:01:01] Speaker A: If you like what you heard, please do drop a review in your podcast directory of choice. It really does help spread the word about the show. [01:01:08] Speaker B: And of course, if you can spare even the smallest amount of financial support, we'd be incredibly grateful, and you can support us at ko-fi.com/crossedwires. That is ko-fi.com slash crossedwires. [01:01:22] Speaker A: Until next time, thanks for listening.
