
Behrend Talks: A Penn State Podcast
Join Dr. Ralph Ford, Chancellor of Penn State Behrend, and guests for conversations about interesting things happening in the Erie community.
The rapid evolution of AI, with Dr. Tiffany Petricini and Kyle Chalupczynski
Dr. Ralph Ford, chancellor of Penn State Behrend, talks with Dr. Tiffany Petricini, associate teaching professor of communication, and Kyle Chalupczynski, assistant teaching professor of management information systems, about their work with Behrend's Ad Hoc AI Task Force. Originally recorded on June 26, 2025.
Welcome to Behrend Talks. I'm Dr. Ralph Ford, chancellor of Penn State Behrend, and today we're diving into a topic that is truly transforming our world. We hear it in the news every day, you can't escape it, and that is artificial intelligence. And I am thrilled to have two faculty members here from Penn State Behrend with us today. They are leaders on our AI task force, Dr. Tiffany Petricini and Kyle Chalupczynski, and they have been instrumental in exploring how AI can be used across campus. We're going to dig in deep, and they are truly engaged in the subject in a significant way. Welcome to the show to both of you.
Kyle Chalupczynski:Thank you, thank you for having me.
Ralph Ford:Well, appreciate it. Tiffany, you know, we're going to go back and forth today, but I'll do a little introduction for each of you. Tiffany, it's great to have you here. You've got a very interesting background, rhetoric, technology, and ethics from Duquesne, and that's really relevant to the conversation we're having today. You are an associate teaching professor of communication. And Kyle, you're an assistant teaching professor of management information systems, and you have some nice experience. You worked for Paradigm Infotech, a company I remember well from when it was in Knowledge Park, before coming here, and you have some other great experience as well. But again, welcome, and thanks to both of you for being here. I want to give you each a moment to introduce yourself, and Tiffany, we'll start with you first. Tell us, how did you end up at Behrend, what drew you down this interdisciplinary path, and how did you end up getting so fascinated with AI?
Tiffany Petricini:Yeah, thanks. Well, I think that you started the introduction well, setting it up to explain my background in rhetoric, technology, and ethics. I actually was really interested in what used to be called computer-mediated communication, so this was even in the days sort of pre-social media in the way that it is now. I was just fascinated with the way that technology was impacting relationships, and of course a significant part of relationships is communication. And so as I sort of traversed through my studies and got into my graduate programs, technology was so central to everything that I was trying to understand in my own world. I originally was studying social media. My book, Friendship and Technology, looks at the way that technology in general has impacted our friendships, but specifically social media. And the one review I've had of my book, the only review that's out yet, the only critique that I got, is that I wasn't thinking about AI and how AI impacts friendships, and it was fair, it was a really, really good criticism. I wrote that book in 2020-ish and I just wasn't thinking about AI. It was there, but we hadn't hit the ChatGPT boom yet, and as soon as we did, I really, really got interested, and it just so happened that I was teaching at Penn State Shenango.
Tiffany Petricini:And when I was at Shenango, that's when I really started getting interested in AI, and I was just trying to understand why so many places were banning it. Why are we banning AI? What is happening? Why are we doing this? And I started looking and trying to understand the evidence. Where is the evidence? And it wasn't evidence-based. The institutions and their automatic sort of turning to banning was problematic. So I had just started researching, and I got connected with Teaching and Learning with Technology at University Park and we sort of formed a research team. And I got really lucky, because Behrend was desperately trying to find someone to teach communication studies, and at Behrend the communication department's focus is media, so it was just serendipitous, it was really lucky. They really needed someone. And so the school director, Melanie, reached out to me and said, hey, would you be interested at all in coming to Behrend? And I'm really, really fortunate that she made that call.
Ralph Ford:That's a great story. We're happy you're here. And before I jump over to you, Kyle, you actually made me think of a few things. Do you remember something called ELIZA, that program from a long time ago that was supposed to act like a therapist? People tried to communicate with it, right?
Tiffany Petricini:So I don't remember it, but I know of it.
Ralph Ford:Well, I've been around a little longer than you and I remember it, and I actually remember how poor it was. After a few interactions you figured out pretty quickly that there wasn't any intelligence. Anyway, we'll go a lot further than that, but I guess what you made me think about with that intro is, from the moment computers were created, people have been trying to figure out how to communicate with them, and that's going to be part of the discussion today, so I think your topic is right on. Kyle, let's jump over to you. Why don't you tell us a little bit about your background and how you got here as well?
Kyle Chalupczynski:Sure. So I always start out by saying that you know, ultimately I'm a computer nerd at heart. So regardless of where I ended up, I feel like I would have been kind of, you know, obsessed with AI to a degree. But, as you mentioned, I spent some time working with Paradigm Infotech as a contractor for GE Transportation. After that I went to Erie Insurance where I was in a business analyst role. Both roles were different flavors of the business analyst role and I really kind of fell in love with that way of thinking.
Kyle Chalupczynski:I've always enjoyed solving problems. So having the skill set to kind of systematically dissect problems or opportunities and create solutions and figure out better ways of doing things has always kind of just been something that I've enjoyed. So during my time as a business analyst I also spent some time, like I mentioned, helping out with our quality analyst team and doing some automation of our test suites with them. So I really found that I enjoyed the automation side of things as well. So with that background, you know, I was just going along doing my thing Sometimes I say flying under the radar, teaching my core MIS courses, and then, of course, about two and a half years ago, we had our ChatGPT moment and really everything changed and I won't get into the details because I think we'll cover those later just as far as what was going through my head and all of that.
Kyle Chalupczynski:But at the end of the day there were a lot of contributing factors that really kicked things into high gear for me. One was, I guess, full transparency, I've always dealt with imposter syndrome a little bit, and landing here at Behrend with no teaching experience and no background in instructional design, that was most definitely in full gear, right. So the first thing I thought was, well, this is an opportunity to start filling the gaps, right, and to feel maybe like I belong here a little bit more. But then other things contributed as well. As we started to learn, well, this is going to be in very high demand from employers. Well, the entire reason I'm here is to help students get jobs, right. So that was another factor.
Kyle Chalupczynski:As I mentioned, regardless of what my job or role ended up being, I probably would have been playing with this stuff anyway. So now I'm just fortunate that I'm in an environment where I can dedicate a pretty significant amount of time to researching this and trying to stay on top of it as best I can, testing new things out and really treating the classroom almost like a sandbox, figuring out what works. Nobody's going to have all of the right answers right up front, but the way that we're going to get to those answers is by trying different things, being agile, seeing what works and iterating quickly, and AI allows us to do that, right. So that's essentially been it for the last two and a half years. It's been a whole lot of experimentation. Some things have not worked. A lot of things have worked and, like you said, as I'm sure we'll talk about, I think that my classrooms are a better experience because of it. I enjoy my work more, it's freed up time to do other things, and so I would say.
Ralph Ford:That's it in a nutshell. That's a great summary, and we're all learning as we go through this, and you know we could spend hours on this, so maybe this is the first of multiple podcasts on the subject. And full disclosure: my background is machine learning, and I spent my career in computer vision and neural networks, and the AI I learned when I was in school was focused on a lot of things like game theory and how you use logic and the like. We've advanced so far. It truly amazes me; I can't believe some days what I see when I get responses to problems that I look at. So we'll dig into all of that. It's a rapidly advancing field and, as you've said, I know it's just a matter of jumping in and working with it and understanding it, for better or worse. We're going to get into a lot of those details. So here's a question for both of you, and we'll work through all of these.
Ralph Ford:AI, by the way, is not new. We've been thinking about this for a long time. The movie Terminator, right? It's the fear of AI coming to get us, and robots and the like. But now it's starting to seem like these things actually have some potential, right? Still, let's stick with something a little more worldly right now, which is ChatGPT. It arrived in 2022, and overnight 100 million people were using it. Why was that such a big game changer? What happened? Were we all expecting that when it showed up?
Tiffany Petricini:Well, you know, I think there are different perspectives on this, and when I'm looking at it from sort of a historical point of view, it has a little bit of technical elements to it. Never before in human history have we had the computing power that we have now, and we have not had access to the amount of data required to make something this successful. And so it just sort of all came together in a very ripe moment to create this tool, and it really did take the world by storm. I had tried chatbots before, you know, not ELIZA, but other chatbots that use different technologies, the ones you talk with when you contact a company and you're sort of trying to do pre-troubleshooting before you get a human being, and none of them were as capable as ChatGPT was. Kyle, though, you have a different take on this.
Kyle Chalupczynski:Like you said, I think that all of those technology factors had to converge in order for this to happen. But seeing ChatGPT on my timelines, on social media, and thinking, well, no, that's just not possible, you can't tell me that somebody told a computer to write a letter from a CEO to whoever and it actually came out and it made sense, right. So I think a lot of people found themselves in the same position I was in, where it was like, no, I have to see this for myself, and I think that's really been a driving force here. I think OpenAI in particular is really good at capitalizing on those types of viral moments, like when we saw the Studio Ghibli images of everybody and their dogs and pets, and literally everything imaginable was Ghiblified.
Kyle Chalupczynski:We don't really see other labs do that. Actually, I shouldn't say we don't see other labs do that, because that's kind of a barometer for me for keeping an eye on things. For example, just in the last couple of days, I started seeing on the timeline videos of animals competing in human Olympic events, and to me that says, okay, look, there must be a new model out with new capabilities, because we didn't have this quality before. So now I know that I have to start pulling on that thread and going down those rabbit holes. But, like you kind of alluded to, Dr. Ford, this stuff happens on a weekly, if not daily, basis, and there are so many different players in the game in addition to, you know, the core AI labs. The volume of those kinds of viral moments we're seeing is a lot higher now.
Ralph Ford:Yeah, it's amazing, and this is a truism: the AI you have today will only be better tomorrow, and that's going to be the case for quite some time. You know, Kyle, you like to get geeky, right? So we all know about chatbots. We interact with them, and they can do tremendous tasks for us. But what has interested me is this idea of agents. It sounds rather ominous. There are AI agents quietly out there, and I think they're behind some of the things that are happening in our lives, or in manufacturing processes that implement AI. Can you explain to me what these agents are and how they operate?
Kyle Chalupczynski:I can explain my understanding, and hopefully Tiffany, or even you, can fill in some of the gaps, because that's kind of one of the buzzwords of 2025. We started hearing this year that this was going to be the big thing, and I think that there is absolutely a lot of truth to that. So, whereas with the traditional AI that we've all become used to over the last two and a half years, you know, I have to think of, or have AI help me build, the perfectly worded prompt and make sure that it's got all of my criteria and stipulations in it, and then I have to go back and forth with the chatbot to continue giving it direction and feedback, agents are a lot more autonomous. Agents have kind of a high-level task, and they have access to a set of tools. Whoever creates the agent decides which tools they have access to, and then it essentially has the ability to make all the decisions for itself. It knows that it has to achieve this high-level goal, it uses the tools at its disposal, and sometimes it connects to additional agents. We're starting to see now that multi-agent swarms are more effective than single-agent approaches. But it's closer to that fully autonomous silver bullet: I press the button and the work gets done.
Kyle Chalupczynski:And I want to be careful with how I mix terminology here because I don't want to say anything that's wrong and there's kind of some debate on this right now.
Kyle Chalupczynski:But people have said that if you've used ChatGPT's o3 model, you've kind of experienced on a smaller scale what that agentic behavior is like. One of my favorite examples is if you take an image outside of any building or wherever, send it to o3, tell it to be a GeoGuessr, and then watch its chain of thought and the different tools it calls on to actually, in a lot of cases, figure out where you are. This is another use case that went viral. It gets it right a lot of the time. It can nail your longitude and latitude just from taking in all the visual cues from the image that you provide. But it's building code within the chat to help it do that. It's checking weather reports. It's checking things that you wouldn't even think to check to come to that solution. So again, some people will say it's not an agent, some people say it is an agent. I just say that it helps give you an idea of how agents behave.
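To make that loop concrete, here is a minimal, illustrative sketch of the pattern Kyle describes: an agent gets a high-level goal and a fixed set of tools chosen by whoever built it, and a decision step repeatedly picks the next tool to call until the goal is considered met. The tool functions and the choose_next_action() stub below are hypothetical stand-ins, not any specific vendor's API.

```python
# A minimal, illustrative agent loop: a high-level goal, a fixed set of tools
# chosen by whoever built the agent, and a decision step that picks the next
# tool to call until the goal is considered met. The tools and the
# choose_next_action() stub are hypothetical stand-ins, not any vendor's API.

from typing import Callable, Dict, List, Tuple

def search_web(query: str) -> str:
    return f"(stub) top search results for: {query}"

def run_code(snippet: str) -> str:
    return f"(stub) output of running: {snippet}"

def check_weather(location: str) -> str:
    return f"(stub) weather report for: {location}"

# Whoever creates the agent decides which tools it has access to.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_web": search_web,
    "run_code": run_code,
    "check_weather": check_weather,
}

def choose_next_action(goal: str, history: List[str]) -> Tuple[str, str]:
    """Stand-in for the model's decision step. A real agent would send the
    goal plus the history of observations to a language model and parse its
    chosen tool and arguments; here we simply try each unused tool once."""
    for name in TOOLS:
        if not any(h.startswith(name) for h in history):
            return name, goal
    return "finish", ""

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    """The agent loop itself: decide, act, observe, repeat."""
    history: List[str] = []
    for _ in range(max_steps):
        action, argument = choose_next_action(goal, history)
        if action == "finish":
            break
        observation = TOOLS[action](argument)
        history.append(f"{action} -> {observation}")
    return history

if __name__ == "__main__":
    for step in run_agent("Figure out where this photo was taken"):
        print(step)
```

The point is the loop: unlike an ordinary chatbot exchange, the model (here a stub), not the user, decides the next step, and a production system would wrap that loop in guardrails limiting which tools can be called.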
Tiffany Petricini:Yeah, Kyle did a really nice job, I think, of unpacking the idea that there are different ways to frame the word agent, so there's a lot there. AI agents in a really general sense are just tools, really basic tools that do things that we want them to do. They're like an agent in the everyday sense: a sports agent is someone who gets you a job, so you could think of it that way. Then there's this idea of agentic AI, and I think that this is maybe more like the term you hear if you're familiar with artificial general intelligence.
Ralph Ford:And this is yeah.
Tiffany Petricini:So this is the idea that there are these really super powerful, super autonomous AI that are acting without our understanding. In the news this has been big lately, because there are all sorts of reports about AI being deceptive and trying to blackmail the companies when they threaten to shut it down. So this would be an example of the rhetoric that is sort of AGI-related, and that's sort of the science fiction realm. There's agentic AI in the sense of a tool, there is what Kyle was talking about, which is incredible AI that has more autonomy, but there's no fully autonomous AI. And when we start thinking about that idea of fully autonomous AI, we sort of get out of the scope of what matters right now, the here and now: the tools that we have, intelligent or not, still have really big impacts on the work that we do, the way that we think and the way that we learn.
Ralph Ford:Yeah, my understanding matches as well, and you two have more expertise here than I do, but I've seen it in industrial applications, and the idea is you actually have agents that self-adapt, right. They take feedback, they have a goal in mind, and then they're able to adapt and figure out ways to get there that we don't fully understand. I mean, we understand that they're changing the underlying algorithm on which they operate, but how they're making a decision isn't always immediately obvious. You can imagine the use of that for good, and you can imagine the scary part of it if it's not used for good as well. But I see it in everything now. My Microsoft Office comes up and tells me I can use agents to carry out tasks; I haven't fully figured that out yet. So that's interesting. At least in the realm where I'm working, I definitely see it in industrial applications.
Kyle Chalupczynski:Could I jump in real quick, just because, Tiffany, you mentioned the recent articles about the blackmail and everything? That was actually something that we were probably going to get to later when we talk about some of the concerns, but since you brought it up, I figured I'd jump on it now. I think, and I'm not saying that you were doing this, Tiffany, but I think that what happens a lot of times is that we see articles like that, and I see them all the time too, that was just the latest one, and if you read those articles, almost none of them actually explain the background of what was going on there. So for an uninformed consumer, that's going to sound scary, right? That's going to sound like, well, this is most certainly Skynet and we should not be using this.
Kyle Chalupczynski:Like I said, what they almost never mention, and we'll use Anthropic as an example because they were actually the ones who did that study, is that a lot of times when you see those scary headlines, it's because Anthropic is conducting very carefully designed experiments that specifically put AI in that position. They're essentially trying to bait AI into doing something like, for instance, hey, if you threaten to turn me off, I'm going to blackmail the CEO or the developer, whoever is threatening to do that. But the entire reason Anthropic is doing that is so that we can study those conditions and build better guardrails for the system. So I think that's one caveat that's important to keep in mind, because we'll continue to see those types of articles, and Anthropic is going to continue to do that type of research. But even when it's not coming from them, I think that's a risk that's particularly high as far as the public perception of AI goes, and I'll leave it there so that we can talk about it later.
Ralph Ford:Yeah, I mean that's the classic red-team attack sort of thing that you look at in cybersecurity and in the military, where you're trying to prepare yourself against those threats. It makes a lot of sense.
Kyle Chalupczynski:Yeah, it should be, hey, thank goodness Anthropic is doing this, not, hey, everybody, we can't have AI because AI could do X, Y, Z.
Ralph Ford:Well, somebody is going to do that, and that's why they need to do it, of course, right? They need to be prepared. Well, let's switch to the academic world. As we know, ChatGPT immediately changed, first and foremost, writing assignments, but it is permeating everywhere. I can give you examples, I won't right now, but it has changed the landscape. So let's talk: what do faculty need to know or think about in regard to AI in their classrooms? This is a huge question, so let's start maybe at the basic level. Is this the devil that we should turn our backs on? Beyond writing, what are the changes? What's the advice you give to faculty who come up to you and say, geez, this scares the heck out of me? Tiffany, why don't you go?
Tiffany Petricini:Yeah. So in the research that I've been doing, I have been looking at student and faculty perceptions, and now we've started looking at staff perceptions, and we just got a new result that really reinforces that faculty need to be very intentional about their decisions, both to include and to exclude AI. The latest research that we're doing shows that using AI in a gen ed course, in this case public speaking, CAS 100, can actually increase the effectiveness of integrative thinking. That's a gen ed pillar, and we can see that over time having AI embedded and integrated in a course impacts integrative thinking positively. Now the only sort of, I guess you would say, fence is if a student isn't exposed to AI in a way that they can actually integrate it in other courses. So, for example, if I embed AI in my class and I encourage my students to use it, I'm teaching them how to use it responsibly, reflectively, ethically, and then another instructor just bans it for the sake of banning it, because they don't understand it and they don't want to deal with it.
Tiffany Petricini:That is creating a problem. It's actually limiting students at the course-to-course level. Another research article that we published not too long ago actually looked at that same sort of concept as a source of inequity among students. So it's creating these inequities, sort of in access, in all sorts of different types of ways, but also in thought and learning, and so it can be really hard, I know, for faculty who are already so burnt out and have so many things they need to do and stay on top of. Administrative responsibilities are rising, class sizes are rising, and it can be easy just to say you know what, I'm banning it, but it can have real deep impacts on students, and so it's just something that faculty need to be aware of and mindful of.
Ralph Ford:Well, if both of you could, maybe, let's make it real. What are examples of how you have used AI effectively, do you think, to improve not just your efficiency? That's great and actually really important. But how about student learning?
Kyle Chalupczynski:I'll say that that's one of the reasons why I'm such a huge proponent of AI: because, like I said, I feel like I've been able to, maybe not completely, but certainly close some of the gaps in my understanding of instructional design and exactly what goes into building a well-designed course and how to actually achieve your learning objectives. And, just to tie in both questions, to any faculty that are feeling anxiety around this, I 100% get that, and it might not seem like it, but I came from that same exact place. I had the period where I was spinning my wheels and essentially accepting the fact that the rest of my career was going to be chasing down academic integrity violations, because there was no way that that wasn't going to happen, right? But I decided that that wasn't a good future, that wasn't a good way to spend my time in the classroom, and that wasn't doing anything, or going to do anything, for the students.
Kyle Chalupczynski:It was becoming increasingly clear that the majority of students were going to be using AI no matter what, and it also became clear that not many students were inherently adept with AI. They were treating it like a traditional search engine, like Google. At the same time, we were seeing an increase, like I said, in businesses looking to hire AI-savvy students. So the natural progression was to require students to use it, but to make them use it in a specific way. And again, I didn't start out doing everything at once; there was a whole lot of experimentation. To Tiffany's point, yes, time is fleeting, and do we actually have the bandwidth to completely redesign all of our courses? But I'll also say that it was a lot easier than I expected. It was as simple as starting out by feeding all of my assignments to AI and first seeing how AI handled them, just to get a benchmark of what a student might produce if they did the bare minimum: handed it the assignment and said, do this for me.
Kyle Chalupczynski:And, as you can probably guess, most of the results were A or B papers or assignments. So then the next step was, okay, now that you know about all of these assignments, suggest maybe three or four of them that I could revise and integrate AI into in some way, so that we're still hitting the core learning objectives from whatever that chapter or topic was, but we're also adding something to the end. And I was really fortunate that I teach MIS, a major that really lends itself well to AI, because right now almost every company on the face of the earth is trying to figure out how to use this technology to solve its problems and make more money, and at its core, that's what MIS is all about. So that did, like I said, give me a lot of room for experimentation. But I will say that, just out of curiosity, I've stepped outside the borders of MIS a little bit, not that there's any one faculty member going, no, Kyle, you know, finance is going to be untouched by AI, but just out of curiosity, because early on we heard that AI doesn't handle math so well. So I started testing things out, saying, okay, what would a finance assignment look like with just a little bit of AI baked in?
Kyle Chalupczynski:I think that there are a lot of opportunities that probably aren't readily apparent on the surface, just because we're not used to thinking in this kind of AI co-pilot relationship. But ultimately it's starting small, it's making little changes, it's accepting that, again, not all of them are going to work, but we can now iterate pretty quickly on different things and try different things in the classroom, and even if things don't work, that's still a learning opportunity for the students, right? Because you can explain to them why something didn't work the way that it did, and then you're highlighting one of the limitations of AI. So the quickest way to kind of, you know, dip your toes in is to actually jump right in.
Ralph Ford:Thanks, Kyle. Tiffany. Yeah, go ahead, Tiffany.
Tiffany Petricini:Yeah. So, like Kyle, I've used it on my end for design, and I use it for playing the role of a student. For example, I feed it my syllabus and I might say, pretend you're a first-generation student: is there anything about this syllabus, or this policy or X policy, that could be more inclusive? And it's really helpful. On the student end, I definitely incorporate it, and I try not to do just one-and-done assignments.
Tiffany Petricini:It's embedded, because, and this is 100% true, students are using it. They're not going to not use it; they're just going to not tell you they're using it if it's banned. And so when you find a way to incorporate it, you're going to be able to start teaching students about the thought process itself. The evidence is coming out, and it's all over the place, but we are seeing that there are impacts on critical thinking and critical thought. The studies are showing that if you think first and then use AI to amplify that, it's even better than just working on your own. And so that's where we need to come in, and we need to teach students that it's not about replacing, it's about refining.
Tiffany Petricini:So in the public speaking course we use it from the beginning. First I have them write an outline and then have AI critique their outline. I have them write a speech and then have AI critique that speech. Then I have them have AI write a speech and they critique it. It's really important for them to see. It's sort of like what Kyle said, where you just have to keep trying it on your own, because you can't know the limits. AI is so personalized, the uses are so personalized, that you won't know you've hit a limit until you get there. Also, the Commonwealth Campus Teaching Support Team is a really, really great group of individuals with some great resources.
Tiffany Petricini:And they have a resource where they talk about common learning challenges for students, across disciplines, so this is not just communications related. When I found that, I found it was a really good way in as far as student learning goes. Students, hands down, really have problems with brainstorming. So much of my office-hour time was spent with students trying to brainstorm topic ideas for a speech, and it's unnatural. I tell them that in the real world they're not going to pull a topic out of a hat. In the real world, if they're going to speak in front of an audience, it's because they've been invited, because they're talking about something particular. So brainstorming is so difficult, and I walk through exercises to show them how AI can assist in brainstorming.
Tiffany Petricini:Organization is another learning challenge, and AI can help with that. Revisions, proofreading, AI can help with those too. And, you know, some of the arguments are that AI shouldn't replace an instructor, and that is 100% true. But on the other hand, we've been replacing instructors with teaching assistants to do these very types of things for a very long time. When we started putting TAs into classrooms, we were already starting to put distance between the instructor and the student. AI is never a replacement for a teacher, but it's also a great step and a great tool for students who are anxious or struggling. They can start there and follow up with their instructor. So those are some of the ways I've used it as far as student learning goes so far.
Ralph Ford:Thank you both for your examples. If we can switch, and this is the trick, from just using it to give the answer, that's the obvious problem, to using it as your mentor, your teacher, your learning assistant, your tutor, that's where real learning can happen. And I'll give you one quick example, something that I tried, to your point, Kyle; you made me think about this. Can it do math? At least the one I'm using can, and here's what I uploaded.
Ralph Ford:I took a circuit that I use in a class that I teach, a very common one, but it's third level, so it's not simple, and it's nonlinear. I scribbled it on a piece of paper as legibly as I could, put the currents and voltages there, and I asked it to solve the circuit and give me an explanation, and frankly, it did as good a job as I could have done as a faculty member. It looks like a textbook solution. But then the really interesting part to me was that I started to think, well, that's okay, but what's the next level? There are different parameters in the circuit that make it do different interesting things, and I won't get too geeky, but drive it into saturation or use it as an amplifier. And then I found I could start to ask it questions about what happens, the what-if.
Ralph Ford:If we can start to get that level of thinking, we get to the personalized tutor and the personalized learning. What fascinates me about that is I think that's the goal we want, right? We're the teachers. A student would never get to that circuit without my help; they wouldn't know to just go look at that. But I get them that far, and then they can do some learning and prepare for the exam on their own. Anyway, a little bit of my own insertion there, but I think that's the real trick we're going to face.
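As a rough illustration of the kind of tutoring exchange Dr. Ford describes, here is a minimal sketch of how one might script it: send a photo of a hand-drawn circuit with a request for a worked, explained solution, then keep asking what-if follow-ups in the same conversation. It assumes the OpenAI Python SDK and a vision-capable model; the model name, the file name, and the prompts are illustrative placeholders, not the actual tool or circuit used in the episode.

```python
# A rough sketch of the tutoring workflow described above: upload a photo of a
# hand-drawn circuit, ask for a worked solution with an explanation, then follow
# up with "what if" questions in the same conversation. Assumes the OpenAI
# Python SDK and a vision-capable model; names and prompts are placeholders.

import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Encode the scribbled circuit image so it can be sent inline as a data URL.
with open("circuit_sketch.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Solve this nonlinear circuit for the labeled currents and "
                     "voltages, and explain each step like a tutor would."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Keep the conversation going with the what-if questions that drive real learning.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "What happens if the supply voltage is increased until the device "
               "saturates? How would the behavior change if it were biased as an amplifier?",
})

follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)
```

The second call is where the personalized-tutor value shows up: the learning comes from the follow-up questions, not from the first answer.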
Kyle Chalupczynski:Yeah, I absolutely agree, and I think there are definitely students who understand that this isn't just hype. Yes, there's hype around it, but this isn't just the next hyped-up technology; there are going to be worldwide ramifications here. The students that really latch on to that and really use the technology intelligently, the way we're hoping and trying to teach them to use it, do things that are just incredible.
Kyle Chalupczynski:We've talked about having our minds blown on a somewhat periodic basis by AI. It happens to me with students as well now. That said, I do think that that is a minority of students right now. I think the base assumption is, oh, another new technology, it's the same as virtual reality, social media or Bitcoin or whatever. But we know that it's not, right? We know that the world's governments didn't pump trillions of dollars into making sure that Bitcoin or blockchain was a success, or the same for social media or VR and AR. So it's really, I think, A, communicating that to students, but then, B, I think the next challenge is figuring out how to teach them to be curious.
Kyle Chalupczynski:I think that, you know, with those higher performing students, that seems to be the missing piece: they're naturally curious. They naturally want to go down those rabbit holes, and with AI you can go as deep as you want down a rabbit hole. If a student isn't inherently curious, I think that's a real challenge. At least, in the last year specifically, what I've come to realize is kind of my next hurdle: how do I teach them to be curious?
Tiffany Petricini:Kyle, I'm really excited you brought this up, because the study that I'm wrapping up also looked at curiosity and the way that AI, integrated into the classroom, may impact curiosity. Students came into the class very curious, which was really, really great, and they left the class very curious, and so it means that AI does not kill their curiosity. Just because they can turn to the tool, it doesn't mean that it kills it. But there was an interesting connection between curiosity and stress, so it's not even about sparking their curiosity. Our students are really, really curious already, and they're interested, but it's about teaching them how to navigate the stress and sort of be mindful.
Tiffany Petricini:I guess that's what you'd say, and this goes back to what you said, Dr. Ford, which is that that's really our role as instructors. That is where we are moving toward. We aren't just these suppliers of knowledge who dump knowledge into their brains and then they walk away. We're not just teaching them how to use the tools; that's not what it's about. We're teaching them how to use the tools effectively, and we're teaching them content matter. And so it's sort of like, you know, AI is really, really finally pushing education into a framework where we're educating the whole person, which is something we do, of course, at Behrend.
Tiffany Petricini:But you know, I think it really really is starting to draw out that education can be and should be and needs to be so much more.
Ralph Ford:Yeah, you know, I'll take it to the next level and I'll give you some stats. First, another story: I was speaking recently to a high-level industry leader, a Behrend graduate, and she said, we don't hire anybody now without them indicating some level of understanding of AI. And there are plenty of surveys, from LinkedIn, Microsoft, all these companies, showing that a high percentage want AI skills. Not only that, students are afraid of AI, more so than faculty, more so than our staff; we've got the data to show that. And students want to be prepared for this AI world. So I'd like to dig into that further, and maybe the first question is: how do we ferret it out? How does industry, when they're here interviewing a student on campus, know the curious one who used it in a really good fashion from those who've been, I don't know, I'll call it posers, who are just using it and getting by, not using it as ethically? Is there any way for us to figure that out?
Tiffany Petricini:Well, in my class in CAS 100, as we go through these AI exercises, one thing that they have to do is an exercise where they put the AI skills they've gained in the class directly into their resume.
Tiffany Petricini:So that may be a specific assignment, that may be something they've built, and I think that teaches them the language that they need to explain it, because that's what we do within our disciplines. We have to teach students how to enter the conversations, and they have to know how to do that. And I think that's a really good giveaway to someone who knows how to use the tools effectively versus someone who is sort of a, you know, poser. I'm using air quotes for that.
Kyle Chalupczynski:But you know someone who's using it.
Tiffany Petricini:Well, they know. Like with Kyle: I really appreciate every time I get to sit in on one of his talks, because I learn every single time, and Kyle talked about imposter syndrome, but every time I'm sitting in on one of his talks I feel like, oh my gosh, he knows so much. Whereas when I go to professional development opportunities, there are people who are just talking but not really saying anything, and I think that now it's easy to get looped in, especially with LinkedIn. Everybody's an AI expert, everybody's an AI coach, everybody's an AI consultant, and it's easy to throw that label on there. But you start to see the people who really know what they're talking about and the people who don't.
Kyle Chalupczynski:And that's part of knowing how to use the language and enter the conversation. I'll just add that if we're talking about something like a pre-screening, yeah, that becomes a little bit more difficult. First of all, I'll say that I love to hear, Tiffany, how you're having them document their AI usage like that, because I'm doing something similar: they have to create an e-portfolio that focuses on some of their favorite ways they used AI throughout the semester.
Ralph Ford:Yeah, I think I'd like to move on to the next subject, if you two don't mind, which is how you're bringing your expertise to our students and to a larger audience. So first of all, internally, I'll just throw this out here and you can both answer: we're going to create, no, we have created AI certificate programs. We've got an AI course here this fall which I've heard is already sold out, and I think we're trying to get more sections of that. Then you're offering a continuing education program, AI Essentials for Professionals, a seminar here in the next few weeks. So talk us through what's going on on the curricular side, what you are developing and what others here at Behrend are developing as well. I know there are many involved.
Tiffany Petricini:Yeah, there is a lot right now, and it's moving so quickly. I should say that the class you mentioned is Humanities 220N, called AI and the Human Experience, and we are already at 50 students across two sections. We increased the class size so it can take 40 now, but it's almost full again, and it is a seminal course for both AI certificates. So whether students are in the more technical one or the more general one, they still have to take this course. I have been designing the course and I'm pretty excited about it; I have it almost finished. It feels almost like a Christmas present. I can't wait to get to the class on the first day, reveal it to the students and let them unpack everything that we're going to do. It's fun, it's really fun.
Tiffany Petricini:Something else that Kyle and I are working on, and just a little preview: University Park started something called the AI Arcade, and the AI Arcade is a resource for students, and faculty too, but mostly it's student focused, and it supplies students with pro versions of some of the major AI tools. So ChatGPT Pro, which is $200 a month and which I've only used through TLT, Midjourney, and then Suno, which is a music tool. We're working on bringing that to Behrend, and we're pretty excited about it because it will also help with some of the issues associated with equity.
Tiffany Petricini:There are very few students who can afford $200 a month for a ChatGPT Pro account; I don't have one. But bringing the AI Arcade will help, it really will. It'll have a dedicated space, and we don't know where that will be yet, probably the library, but it may be somewhere else, a dedicated space where students can come in and actually use the tools and use them well. So those are two very exciting things I'm thinking about, but there are so many more.
Ralph Ford:Absolutely, Kyle, you want to add?
Kyle Chalupczynski:Yeah, I'll also say that I found myself getting excited for fall semester way too early. At the student level, Tiffany and I have both talked about how we've integrated it into some of our classes a little bit, but it's not just us. Right now there are close to 30 courses at Behrend that we know of that have AI integrated in some way, shape or form, and we're actually working on getting a website together, because that's something students have been asking for: after they've gone through my MIS 204 class, it's, Mr. C, what other classes can I take that have AI integrated? We also have the AI certificates that we offer to students. Those have been in development for about a year and a half, but they're officially launching in the fall. We have a more technical one, the Certificate in Building Artificial Intelligence, and then there's the Interdisciplinary Certificate in Artificial Intelligence.
Kyle Chalupczynski:There was a lot of effort that went into making sure that those were cross-disciplinary, because that's kind of the way we see things headed here. We're also looking at what we can do for faculty and staff; I know Tiffany mentioned some of that. On the business partnership side, there's a lot of interesting stuff going on in our Innovation Through Collaboration program, which links students up with typically small and medium-sized businesses in the tri-state area. We started giving students ChatGPT Plus access for those projects, and we've seen both the quality and the quantity of student work on those projects absolutely skyrocket, and client satisfaction has gone along with it. So we're starting to look at really making that part of the core experience.
Kyle Chalupczynski:It kind of organically happened, and the ITC program kind of morphed into ITC plus AI. Especially with one of our projects with Recap Mason Jars over the last spring, we were able to really refine that process, so we think we have a repeatable and pretty solid business model for adding significant value: creating, you know, custom chatbots for organizations or things like that, stuff that they might be paying a consulting firm thousands and thousands of dollars for, and our students are able to do it. So, A, it's really cool to be able to see them provide value for local businesses. But also, I told the students that were on the last project, and they already had internships and even job offers lined up, so as far as they were concerned it wasn't that big of a deal, but I said, absolutely put this on your resume. The fact that you can build a custom chatbot for an organization makes you extremely attractive to potential employers.
Tiffany Petricini:Something I want to add, just on this momentum that we're talking about: you mentioned AI Essentials for Professionals, Dr. Ford, and that is coming up next month, which is a great opportunity. Kyle and I are teaching a class, it's on Zoom, and it takes you through just sort of the basics of AI: what it is, how to use it, why it's important. Also exciting is that we are doing separate staff trainings.
Tiffany Petricini:So part of what my research team and I have tried to do is make sure that we recognize that staff are integral to the student experience. Staff are integral to learning, and they also need to know what AI is and how it works. But in the higher ed world they're often sort of ignored, and so that's something that's been important to the whole task force. We have Jacob Marsh on the team and we have other staff who are involved, and it's been really important to make sure that their voices are heard as we start to think about AI at Behrend. I'm really excited, and I'm sure Kyle is very excited too, to do staff training. Right now we have two ideas in mind for the summer, a sort of basics and then a beyond-the-basics, and then, based on feedback, we hope to really start tailoring it to different staff needs as we move forward.
Ralph Ford:Well, I'd be remiss in not recognizing that there are a lot of faculty, staff and students on your task force. This, to me, seems very much a self-forming group. I love that we didn't have to come in and say, hey, you need to look at AI. You all have really just said, we're going to lead the way. That's the best sort of team that you can create, and so I'm really incredibly happy and proud of the fact that we are looking across the whole organization, from how staff can use it and the like. You have my true kudos, to you and the entire team, and we'll want to make sure that we recognize all the people involved. I know we can't do it by name, but it is a significant effort. And that brings me to my next question. You've come with the idea of an AI center. What would that be? What sort of things would an AI center here do?
Kyle Chalupczynski:Sure, I can try. So, as you've seen throughout the course of this discussion, there's a lot of great stuff going on, and we didn't even get a chance to talk about all of it. But to a degree it's happening in silos. Tiffany and I, and even the AI task force, aren't the only people that are excited about AI; there are all these different pockets of excitement throughout the college. We need some kind of structure to capture all of that, to have a place to point people toward when they have questions, whether that's students, faculty, staff or the community. There's so much that's already going on, and the AI center is our effort to bring that to the surface, show what we're doing and really position ourselves as the leaders of AI in this area.
Tiffany Petricini:Yeah, I think everything that Kyle said is perfect, 100%.
Tiffany Petricini:We need to position ourselves strategically, because we are doing great things with AI. I also think that one of the things an established, formal center would do is really help make sure that we have the resources available to develop the infrastructure to continue our AI efforts for as long as we possibly can. Right now we're learning as we go, we're building the website and we're doing the trainings. But having an actual center, as a location that external partners and internal partners can see, would really be helpful: for staff to be able to say, hey, I want to upskill, I'd love to know how to use AI for advising, and also for someone from industry to come in and say, I'd love to upskill my employees, and this is already happening, actually, with fellows who have been interested in this. Having a center as the central hub would, I think, really hit home the importance of having the resources to keep this sustainable for the long term.
Kyle Chalupczynski:And I'll also add, just to maybe bring everything full circle: when I started out at Behrend, probably within my first year, I remember coming home and saying to my wife, the college just needs a business analyst, right? And of course that's the way I thought of things, because I was a business analyst; I was looking at things through a business analyst's eyes. The college needs a business analyst, somebody that can sit down with a staff member, understand all of their pain points and all the different systems they're using, and come up with some better way to do it, to make their lives easier, right? And I saw all kinds of opportunities for that.
Kyle Chalupczynski:I think those opportunities are now even lower hanging fruit. And one thing that I've envisioned for an AI center of excellence would be having student resources, obviously, that are employed by the center. My idea is, again, if you have those students put on their business analyst hats and solve, maybe throughout the course of a semester, three or four different problems and save faculty or staff tens or even dozens of hours, a student that can put that on their resume is going to get snatched up so quickly, because that's the generalist and the AI-literate student that everybody is looking for right now.
Ralph Ford:Yeah, I can really see the vision that you both are painting, and it is truly expansive, and I know we're going to get there in one form or another. But we're coming to the close of our time, and I'm going to ask you both a question. It's not a fair question, but I'm still going to ask it anyway; it's fair in that I can ask it. What's your long-term view? Do you think this is a fad? Is this going to profoundly change our lives? If so, how? Tiffany, why don't you go first?
Tiffany Petricini:Yeah, so I study media ecology, and media ecology, in really simplistic phrasing, is the study of the way that media affect environments. And so, 100%, AI is going to change the way that we understand our world, the way we relate to each other, and it's even going to change our own consciousness. This happened with writing. Writing is a human technology, and the birth of democracy, the time of Aristotle and Plato and Socrates, and then Rome, all of those major developments in human thought were because of writing. And then we had another revolution, with the invention of print. At that time knowledge was no longer interiorized, it was shareable, and we had all sorts of revolutions in thought. Artificial intelligence, 100%, is just as transformative and revolutionary as these other transformations in communicative technology. It is going, 100%, to reshape the way that we think, we relate and we learn.
Ralph Ford:Wow. Thank you, Kyle.
Kyle Chalupczynski:Yeah, that is an extremely difficult question and I'm not going to do it as much justice as Tiffany did.
Kyle Chalupczynski:I'm a person that, and I'm sure we all are to a degree, loves certainty. And so while I've been, like I said, kind of a kid in a candy store with AI for the last two and a half years, that's all been intertwined with a little bit of turmoil, because of questions like, what does this mean for my job?
Kyle Chalupczynski:I think one of the last lists that I saw has business professors in the top 20 jobs most replaceable by AI, right? What does this mean? What should I be doing? What should I be pushing my four-year-old son toward? There's so much uncertainty around this. And when we try to think about how this is going to change our lives, I doubt anybody had any clue, or even, you know, any inkling, that social media, or Facebook specifically, could one day be deciding elections, right? So we're currently thinking within the constraints of our current understanding, our current worldview, and the way that everything works. I think that we're going to see incredible stuff, right?
Kyle Chalupczynski:The stuff that Tiffany mentioned, that's going to change everything, and other stuff too, right? One of the most fascinating areas for me is looking at what they're potentially going to be able to do with this in the biotech industry, and I won't go down that rabbit hole because, as I'm sure you can guess, it's a deep one, but there's so much exciting stuff that could potentially happen that we can't even really wrap our heads around yet. So, if I had to say how this is going to change things for us, I think the one certain thing is that there are going to be quite a few years of uncertainty here and just figuring things out.
Kyle Chalupczynski:And Dr. Ford, I think you follow Ethan Mollick; he says that even if AI stopped advancing today, we'd still have five or ten years of figuring out what exactly this means for us and how exactly we can leverage it, right? But we know it's not stopping, and we know it's not slowing down yet, and we know that the scaling laws the AI labs have identified all seem like they're going to hold true for the foreseeable future. That's what makes things, and I won't use the word scary, because I think it's all extremely interesting at the same time, but there's certainly a lot of uncertainty.
Ralph Ford:Well, both great answers, and it's always hard because people are notoriously bad at predicting the future, and that doesn't mean that you didn't both just make some great predictions. We'll come back in five years and replay this and see how close we were. But you made me think about those early days when Facebook came out and we were all scratching our heads as to why anyone would even be interested in such a thing. So you're correct, these things just take on a life of their own, and we will see where they go. But we will navigate that uncertainty. I think you just have to embrace it and say, I see no other way, because it's not leaving us. It's going to be here, and we need visionaries and people who will jump in, like both of you.
Ralph Ford:So, Tiffany and Kyle, thank you so much. This has been an incredible, forward-looking, engaging conversation. Your work on the AI task force, working with all the faculty and staff here, is truly inspiring, and you're really looking toward the future, and that's what we need to be doing all the time. To our listeners, thank you again for tuning in to another episode of Behrend Talks. I am Chancellor Ralph Ford. I appreciate you joining us. Thank you, and we'll see you next time.