Associate Professor of Computer Science and Human-Computer Interaction
I teach three courses per semester while also serving the university, mentoring students, pursuing my own research, and overseeing student research. I started as an instructor in the Fall of 2013, moved up to Assistant Professor in the Fall of 2018 after completing my Ph.D., and was promoted to Associate Professor in 2022.
Google (Summer 2024)
Research Consultant
Subject matter expert on generative AI for Google's open-source Computer Science curriculum.
iHeart Media (5 Years)
UX Research and Leadership (Contract)
I started as a Senior UX Researcher (2 years) and was then asked to lead the team through multiple transitions as Director of UX (2 years). Finally, I stepped back into the research work that I enjoy so much as a Principal UX Researcher (current). When people ask why I enjoy working for iHeart, the answer is simple: the problems are fun to solve and the people are fantastic.
Previous Experience:
Faithlife
UX Researcher
I spent the summer of 2018 as a UX Researcher for Faithlife, helping lead problem and solution discovery, design, and user testing for a brand new product. See the "Child Check-in" card above for the portfolio artifacts generated from this job.
Testimonials:
"10/10 would hire full time." - Leah Bradford, Faithlife UX Designer
May 2018 - August 2018
Eastern European Mission
Front End UX
Created beautiful front-end experiences, led users and stakeholders in design discovery sessions, and conducted ethnography with missionaries and major donors.
June 2010 - November 2012
Halff Associates
Software Developer
Developed web and desktop applications to support the work of civil engineers using C, C#, and PHP. Halff is a regional full-service engineering firm based in Dallas, TX.
May 2006 - June 2010
Live Radio UX
This project took place in 2021-2022. Since 2019, I've worked at iHeartMedia as a Senior UX Researcher leading research efforts for internal applications. One suite of applications is for sales professionals; the other is for talent (DJs) in the studio. This project centered on building, from the ground up, software for talent to use live on the air in their studios. The previous software was designed and implemented in the 1980s and had received only minor incremental updates since then. Our aim was to replace it with a modern, cloud-native web app. The project was so massive that fully encapsulating it in this case study is impossible, so feel free to ask me questions about individual pieces.
A few months into this project, I was made the Interim Director of UX at iHeartMedia, which meant that not only was I doing the UX research for the project, I was also managing all UX efforts on it, including two designers and an intern.
We started the project with several important research questions developed from UX and business stakeholders:
How do talent use the current software to do their jobs?
What do talent need or want that they don't get now from the current software?
In what ways can a cloud app and AI remove burdens from talent?
What processes do talent perform now that we can rethink and streamline?
In order to answer these questions, I started with a series of interviews with talent.
Initial Interviews
I started work on this project by talking with and learning from the business analyst who had been working on it for the past year and had developed relationships with business stakeholders. I then read as much documentation as I could on the application. Note that each of the following documents is quite a bit longer than the screenshot can show.
I had previously done quite a bit of research on live shows, but for a different portal intended for production directors. At the outset of this project, I was therefore asked to go back through my old user interviews and synthesize what we knew about live shows into a one-pager for management consumption.
I then began interviewing selected top talent from across the nation in focus groups to get a better understanding of how talent do their jobs, what it's like to run a live radio show, how they use the current software, and what the new cloud-native software needs to be able to do to support their work.
Design Sprint
The next step was to run a design sprint, a process first pioneered at Google. The lead designer on the project, Tanya, put the sprint together and ran it. I participated, watched, took notes, and synthesized data from each day of the sprint to better aid our process.
The design sprint totaled four days: three days in a row, then a week to prototype, and a final day for the finale and validation. Each day we had iHeart's top talent on the call participating in the design exercises. Stakeholder buy-in is key in design, and this sprint was perfect for getting it. All of our talent were excited to help create the software they would be using in the future.
We used Miro for our design sprint board. This enabled everyone to collaborate and for the UX team to produce tangible artifacts from the sprint. This picture here represents three days worth of the sprint.
Rapid Prototype
After the design sprint, we felt like we had a good handle on user needs, user workflows, and a direction for design. So, we went to work for a week on designing a rapid prototype.
First Iteration in Axure
We wanted to provide a more in-depth and interactive prototype, so we built our first wireframe prototype in Axure. This allowed us to model some complex animations and interactions for users in the finale session.
It was a whirlwind of a week trying to put this prototype together, but I worked with some outstanding designers who made it happen. We took this back to the users in our finale session to validate the designs and...they hated it. One user said "I think I'm having a panic attack." How could we have missed the mark so much?
Ethnography - Station Visits
As we tried to wrap our heads around what went wrong and why the users didn't like our prototype at all, I started to realize that we hadn't done any ethnographic work. The COVID-19 pandemic made it very difficult to travel and visit stations, but right around this time everything started to open back up. So, I went with a senior designer to studios in Houston and San Antonio, TX to observe morning shows.
Watching our users work live on the air was eye-opening. We quickly realized that there was just too much "in the water" that our users couldn't articulate when asked. There were tons of little things they did every day that we had never heard them talk about. We were also amazed at just how much they fly by the seat of their pants! A radio show is truly done "in the moment," with an extremely thin margin for error. This is why our interviews alone failed to produce a successful prototype.
We also realized we had been solving for only half the problem. Talent use software AND hardware to play out their shows. We thought we could provide everything they needed through the software, but we realized we had to design a digital interface that worked alongside a physical sound board. Below you can see the software we had to redesign and modernize, with the physical board beneath it. There's a lot more to the board, which you can see in the video below.
I've put a little video here below so you can see what it was like to observe in the studio.
Brand New Mocks
Armed with our new insights from ethnographic observation, we went back to the drawing board. We decided to keep the new mocks as wireframes until we had settled on a design that would actually meet our users' needs and reflect the reality of a digital/physical hybrid interface.
Medium Fidelity
We started off by moving what we had done in Axure into Figma so that we could more rapidly iterate on the designs. Then, we slowly started moving the pieces into place.
A lot of rapid iteration took place over the next few weeks as we more clearly defined each of the individual modules that made up the whole app. In the screenshot below, you can see a med-fi wireframe where the currently playing song is highlighted in the playlist. Talent didn't like that at all: they always want to see what's playing right now, and in a playlist they were afraid they could scroll away from it. They also said it just wasn't big enough to see at a glance. Talent juggle five or six things at once while running a live show and need to be able to see certain elements of the interface at a quick glance.
Hi-Fi Mocks
After a dozen rapid iterations over the course of a month, we finally landed on high-fidelity mocks with which users felt comfortable. We also met weekly with the development team to ensure that our designs were not only usable but also feasible. Below you can see some example screens:
We also continued to work with users to iterate on our designs for some of the new more complex features, such as phone systems. Below you can see a working session on the phone systems with two of iHeart's top talent.
With designs finalized, we started building out screens for full prototype usability testing.
Usability Testing
I led all of the usability testing efforts (while also managing the UX team and all of our design efforts across the ecosystem). I decided to begin rapid weekly testing on each of the major modules in the system. By the time we started usability testing, development had already started building the backend and working on the front-end components. With that in mind, I prioritized which modules to test based on development priorities so that we could deliver insights about the usability of our product in an agile way.
We started our usability testing with the phones module because it had the nearest development deadline. We tested with five users - some of the top talent on radio in the United States! It was really exciting to meet some of these legends and watch them work through our phones prototypes. Overall, we got excellent feedback and some really interesting insights into their work and why some of our ideas needed to change to better fit it.
Because this project is so huge, it would be impossible to share all of the insights from all of the different pieces of the app. So, I'll just share what we learned from the phones usability testing. We learned:
The text on the card was too small.
The two green buttons to pick up the call were really confusing.
The buttons were too close together and might lead to unintentional misclicks. Accidentally hanging up on a guest on-air could be disastrous for a show.
No one noticed the star rating system until we pointed it out.
So, we iterated on our designs a few times and finally landed on this below:
We did this for each and every module in the live system. After months of work and re-work, we finally launched the product!
We did a lot of user observation after launch with the first live shows and studios that were selected for the MVP release. We used screen recording and YouTube broadcasting to see both what was happening in the UI as well as in the studio and what the talent was doing with the board. Here's a shot of the second live show we did:
I worked at iHeartMedia for 12 weeks during the summer of 2019 as a UX Researcher leading the research and then redesign of an internal application used by sales professionals to recap the value of an advertising campaign for clients. Many in sales didn't use the current app, and management wanted to know why, as well as what their salespeople wanted from such an app moving forward.
We started the summer with several important research questions developed from UX and business stakeholders:
Why aren't account executives using the current campaign recap app?
What drives adoption of a tool like campaign recap?
What do clients expect to see during and after a campaign?
What is the vision for campaign recap and the road ahead?
In order to answer these questions, I began the design discovery phase.
Research and Discovery
I started work on this project by talking with and learning from the business analyst who had been working on it for the past year and had developed relationships with business stakeholders. I then read as much documentation as I could on the application. Note that each of the following documents is quite a bit longer than the screenshot can show.
After just a week on the job, iHeartMedia flew me to New York City for a week to conduct user interviews with business stakeholders. The following week they flew me to San Francisco. Finally, I returned to corporate HQ in San Antonio to conduct more interviews the following week.
I interviewed people at every level of the company from C-level executives to sales support staff, totaling 17 interviews. For the sales staff, these conversations were in pursuit of answering our research questions outlined above. For the executives, the conversations were to understand the vision of where the company was headed, trends in media, and to get some measure of buy-in from the top.
Each interview was carefully transcribed and the salient highlights pulled out for trend analysis. After this was completed for all the interviews, I produced first- and second-order themes so we could establish patterns in the data, understand our users, and move toward a user experience redesign.
Early Design Artifacts
After completing the interviews and data analysis, we wanted to get a better understanding of our users before creating mock-ups. We therefore pursued persona generation, journey maps, and service blueprints.
Personas
I took the hours of interviews I'd synthesized and developed two different personas for account executives and three for clients. These personas represent the different types of work each group does, and we needed to be sensitive to each of their needs. Developing them helped us target future design work at real users instead of generic ones. It's important to note that the client personas were mediated through the experience of account executives.
After generating these personas, I was able to interview some agency clients directly to validate and update the agency client persona.
Journey Map and Service Blueprint
The next step was to build a better understanding of how users currently conduct their work and to visualize the pain points in a workflow so we could target those weaknesses in future design work. Whatever app redesign we envisioned had to address user needs (personas) and their workflow (journey map and service blueprint).
A service blueprint, like the one to the right, allowed us to see the AE's workflow, along with all the interactions they have with internal and external systems and other users, and where in the behind-the-scenes process a roadblock can occur.
User Flow and Wireframes
Doing the research helped us better understand the problem we were trying to solve. Creating the design artifacts helped us concretize our findings into tangible goals. Finally, we were ready to begin creating the redesigned user experience for the future of the app.
User Flow
We started with abstract user flows. How should a user move through this redesigned app? What should their experience be? Visualizing this allowed us to minimize complexity while supporting the full experience desired by the account executives we interviewed.
Whiteboarding Sessions
After several iterations on the user flow, I sat down for a few whiteboarding sessions with Josh, a UX/UI designer on another project, to begin visualizing the user interface while conforming to the design patterns already in place at iHeartMedia.
During these sessions we had the idea to standardize the UI of the campaign recap tool to fit the flow of campaign management. Account executives should see a campaign as: building, proposing, executing, and finally recapping. This idea helped us align the user interface and experience with existing design patterns.
Paper Sketches and Rapid Ideation
I followed up the whiteboarding with dozens of paper sketches, allowing for rapid ideation. I didn't want to waste time on creating mock-ups on the computer when the early design was still changing so quickly. It also allowed me to easily explore multiple interaction options where there were no existing design patterns.
The interaction style I chose for the campaign recap builder portion of the app drew heavily on PowerPoint and Google Slides, because most account executives were already building their recaps manually in one of those tools, and whatever the new tool created needed to be exportable to a slideshow. I also pulled interaction and design ideas from SquareSpace, specifically how its menu and template system works.
Early Wireframe Concepts
The wireframe mock-ups were all made in Figma. The early wireframe concepts were still evolving, and I needed to put several together to see how they were going to look and feel before deciding. In the examples below, you can see I experimented with which side to place major elements on.
Wireframes
After all of the ideation and prototyping, I created the final set of wireframes and built out the entire app for a complete demo for upper-management. This was very well-received and the final decision was made to seek funding for this project so that it could move from mock-ups to a fully-functioning application.
User Validation
Finally, we went back to the same users and managers that we interviewed and showed them our final designs (not shown here). Here's what they had to say:
I worked at Faithlife for 12 weeks during the summer of 2018 as a UX Researcher leading the research and then design on a brand new product, Faithlife Child Check-in. Many churches now use child check-in software to help keep track of the children that parents have entrusted to the care of the church.
We started the summer with three important questions:
What features are the current solutions in the field offering?
What do users (parents, church staff, and children) really want from a child check-in system?
How do users actually use these systems now (regardless of what they're designed for)?
In order to answer these questions, I began the design discovery phase.
Research and Discovery
I conducted initial discovery at Faithlife in conversation with various stakeholders (the CEO, product managers, church products team, UI/UX team lead), which helped give me a sense of this project's scope and its interaction with existing Faithlife products. Note that each of these documents is quite a bit longer than the screenshot can show.
First, I performed a competitive analysis of what features our competitors were already offering to get a sense of the product space. I collected information from websites and marketing material for the seven most popular child check-in solutions currently on the market. This was important for answering our first research question above.
Next, I conducted research with 78 children's ministers through telephone interviews and a Google Forms research survey. I wanted to find out what users want, what frustrates them about their current experience, and how they actually use their current software solution. This helped answer research questions two and three.
I took the research data and processed it into themes and patterns so that we could get a "big picture" view of the trends we were seeing from 30,000 feet up. Since I collected data from churches of all sizes - from fewer than 100 to more than 5,000 members - it was important to show the data in different ways. I broke the data out by church size, current software solution, and overall satisfaction level.
Finally, I could answer "what problem are we actually trying to solve?" and "what should our solution be like?", which allowed me to generate a user experience flow for each user type at every point of interaction. This received vigorous discussion through collaborative comments from the various stakeholders, as well as other members of the UI/UX team.
UX Design
After coming to a solution about the design of the user experience flow (i.e. what each type of user needs to be able to do at every step), we began thinking through the design of the mobile app by creating flow charts, white board sketches, and finally several iterations of mock-ups.
UX Flow Charts
The first step of design was to translate the user experience flow document into a UX flow chart, representing each screen and thinking through how one moves from screen to screen in the app.
Whiteboard Mock-ups
Next, we spent hours iterating on whiteboard mock-ups for each screen of the mobile app. This helped us visualize what the experience should look like and realize where the design called for existing Faithlife UX patterns and where it called for the creation of new ones. Below you can see some of those whiteboard mock-ups:
Low-Fidelity Mock-ups
At last, we began to create the user interface with low-fidelity mock-ups in Sketch. These were shown to various stakeholders inside the company (CEO, product managers, church products team, UI/UX team) and refined iteratively until we were confident we were ready for our first user tests. You can see some of the low-fidelity mock-ups below:
High-Fidelity Mock-ups
Finally, through iterative design, we arrived at high-fidelity mock-ups. We used these mock-ups for our user testing sessions. You can see some of the high-fidelity mock-ups below:
Parent Mock-ups
Teacher Mock-ups
User Testing
In the final week of my time at Faithlife, we conducted two user tests with our prototype using InVision. I only had time for two in-depth user tests, but I'm glad we got to them because they helped us glean some very helpful insights for the next iteration.
Participant 1 has been a children's minister for 35 years and is in her mid-50s. Participant 2 has been a children's minister for just 2 years and is in his mid-20s. These two user tests provided very different observations that helped us better understand our user base, their needs, and generation gaps in technology usage. Participant 1 took the entire 30 minutes to navigate the user testing tasks. Participant 2 took only 12 minutes, probably due to his familiarity with smartphone apps. I used the rest of our time with Participant 2 to drill down into his workflow and Sunday morning routine.
Each user test was transcribed, with each person's speech color-coded and positioned opposite a screenshot of what the user was looking at when they made the comment. After transcription was complete, I made thorough notes highlighting the most important and salient discoveries from that session.
After completing all of this, I was able to hand off the project to one of the UX designers with whom I had worked extensively during my time at Faithlife, confident that she was well-prepared to carry the product forward and create a final spec.
Dissertation: Beyond Automated Assessment: Building Metacognitive Awareness in Novice Programmers in CS1
May 2018
Master of Arts in Old Testament
Abilene Christian University
Second Temple Judaism, Dead Sea Scrolls, Hebrew
Thesis: (En)visioning Resistance: Applications of the Battlefield Myth in the War Scroll as a Window into the Theological Development of the Community (link)
May 2015
Master of Divinity
Abilene Christian University
Ministry, Greek, New Testament Textual Criticism, Syriac
May 2014
Master of Science in Applied Cognition and Neuroscience
I conduct research on a variety of topics. As a human-computer interaction researcher, I'm interested in how humans use technology, which can take place in many different contexts. Right now I'm working on novice programmer interaction with large language models, metacognition in novice programmers, human factors and design of programming error messages, and usable privacy and security. I am also the sponsor and research mentor for the SIGCHI Local Chapter at ACU, which conducts research that I oversee.
My Research
2024
The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers [forthcoming] (August, 2024) James Prather, Brent N. Reeves, Juho Leinonen, Stephen MacNeil, Arisoa S. Randrianasolo, Brett A. Becker, Bailey Kimmel, Jared Wright, Ben Briggs Proceedings of the 2024 Conference on International Computing Education Research (ICER '24) Peer-reviewed publication with 20% acceptance rate
Evaluating Contextually Personalized Programming Exercises Created with Generative AI [forthcoming] (August, 2024)
Evanfiya Logacheva, Arto Hellas, James Prather, Sami Sarsa, Juho Leinonen Proceedings of the 2024 Conference on International Computing Education Research (ICER '24) Peer-reviewed publication with 20% acceptance rate
Explaining Code with a Purpose: An Integrated Approach for Developing Code Comprehension and Prompting Skills [forthcoming] (July, 2024)
Paul Denny, David Smith, Max Fowler, James Prather, Brett Becker and Juho Leinonen
Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’24) Peer-reviewed publication with 27% acceptance rate
Self-Regulation, Self-Efficacy, and Fear of Failure Interactions with How Novices Use LLMs to Solve Programming Problems [forthcoming] (July, 2024)
Lauren Margulieux, James Prather, Brent Reeves, Brett Becker, Gozde Cetin Uzun, Dastyni Loksa, Juho Leinonen and Paul Denny
Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’24) Peer-reviewed publication with 27% acceptance rate
"Backseat Gaming": An Interview Study on Co-Regulated Learning within a Collegiate Male Esports Community [forthcoming] (May, 2024)
Erica Kleinman, Reza Habibi, Garrett Powell, Brent N. Reeves, James Prather, Magy Seif El-Nasr Proceedings of the 2024 ACM Conference on Human Factors in Computing Systems (CHI'24) Peer-reviewed publication with 26.3% acceptance rate
Prompt Problems: A New Programming Exercise for the Generative AI Era [forthcoming] (March, 2024)
Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett Becker, Brent Reeves
Proceedings of the 2024 ACM Technical Symposium on Computer Science Education (SIGCSE '24) Peer-reviewed publication with 33% acceptance rate
Solving Proof Block Problems Using Large Language Models [forthcoming] (March, 2024)
Seth Poulsen, Sami Sarsa, James Prather, Juho Leinonen, Brett Becker, Arto Hellas, Paul Denny, Brent Reeves
Proceedings of the 2024 ACM Technical Symposium on Computer Science Education (SIGCSE '24) Peer-reviewed publication with 33% acceptance rate
Computing Education in the Era of Generative AI [link] (February, 2024)
Paul Denny, James Prather, Brett Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent Reeves, Eddie Antonio Santos, and Sami Sarsa Communications of the ACM (CACM) Contributed Article (peer-reviewed journal article)
"It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers [link] (February, 2024) James Prather, Brent Reeves, Sami Sarsa, Paul Denny, Brett Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos ACM Transactions on Computer-Human Interaction (TOCHI), Vol 31, Iss. 1 Peer-reviewed journal
2023
The Robots are Here: Navigating the Generative AI Revolution in Computing Education [link] (December, 2023) James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Peterson, Raymond Pettit, Brent N. Reeves, Jaromir Savelka
Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’23) Peer-reviewed publication
Evaluating the Performance of Code Generation Models for Solving Parsons Problems With Small Prompt Variations [link] (July, 2023)
Brent Reeves, Sami Sarsa, James Prather, Paul Denny, Brett Becker, Arto Hellas, Bailey Kimmel, Garrett Powell and Juho Leinonen
Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’23) Peer-reviewed publication with 27% acceptance rate
Programming Is Hard - Or at Least It Used to Be: Educational Opportunities And Challenges of AI Code Generation [link] (March, 2023)
Brett Becker, James Prather, Paul Denny, Andrew Luxton-Reilly, James Finnie-Ansley, and Eddie Antonio Santos
Proceedings of the 2023 ACM Technical Symposium on Computer Science Education (SIGCSE '23) Peer-reviewed publication with 29% acceptance rate
Best Paper Award
Using Large Language Models to Enhance Programming Error Messages [link] (March, 2023)
Juho Leinonen, Brett Becker, Paul Denny, Arto Hellas, James Prather, Brent Reeves, and Sami Sarsa
Proceedings of the 2023 ACM Technical Symposium on Computer Science Education (SIGCSE '23) Peer-reviewed publication with 29% acceptance rate
First Steps towards Predicting Novice Understanding of Programming Error Messages [link] (March, 2023) James Prather, Paul Denny, Brett Becker, Arisoa Randrianasolo, Robert Nix, Garrett Powell, and Brent Reeves
Proceedings of the 2023 ACM Technical Symposium on Computer Science Education (SIGCSE '23) Peer-reviewed publication with 29% acceptance rate
My AI Wants to Know if this Will Be On the Exam: Testing OpenAI's Codex on CS2 Programming Exercises [link] (January, 2023)
James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett Becker
Proceedings of the Australasian Computing Education Conference (ACE '23) Peer-reviewed publication with 43% acceptance rate
Best Paper Award
2022
Parsons Problems and Beyond: Systematic Literature Review and Empirical Study Designs [link] (December, 2022)
Barbara Ericson, Paul Denny, James Prather, Rodrigo Duran, Arto Hellas, Juho Leinonen, Craig Miller, Briana Morrison, Jan Pearce and Susan Rodger
Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’22) Peer-reviewed publication
Getting By With Help From My Friends: Group Study in Introductory Programming Understood as Socially Shared Regulation [link] (August, 2022) James Prather, Lauren Margulieux, Jacqueline Whalley, Paul Denny, Brent Reeves, Brett Becker, Paramvir Singh, Garrett Powell, and Nigel Bosch Proceedings of the 2022 Conference on International Computing Education Research (ICER '22) Peer-reviewed publication with 14% acceptance rate
Scaffolding Task Planning Using Abstract Parsons Problems [forthcoming] (August, 2022) James Prather, John Homer, Paul Denny, Brett Becker, John Marsden and Garrett Powell Proceedings of the 2022 World Conference on Computers in Education (WCCE '22) Peer-reviewed publication
Metacognition and Self-Regulation in Programming Education: Theories and Exemplars of Use [link] (Q4, 2022)
Dastyni Loksa, Lauren Margulieux, Brett A. Becker, Michelle Craig, Paul Denny, Raymond Pettit, and James Prather ACM Transactions on Computing Education (TOCE) Peer-reviewed journal
From the Horse’s Mouth: The Words We Use to Teach Diverse Student Groups Across Three Continents [link] (March, 2022)
Brett A. Becker, Paul Denny, Daniel Gallagher, James Prather, Colleen Gostomski, Kelli Norris, and Garrett Powell Proceedings of the 2022 ACM Technical Symposium on Computer Science Education (SIGCSE '22) Peer-reviewed publication with 29% acceptance rate
Novice Reflections During the Transition to a New Programming Language [link] (March, 2022)
Paul Denny, Brett A. Becker, Nigel Bosch, James Prather, Brent Reeves, and Jacqueline Whalley Proceedings of the 2022 ACM Technical Symposium on Computer Science Education (SIGCSE '22) Peer-reviewed publication with 29% acceptance rate
The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming [link] (February, 2022)
James Finnie-Ansley, Paul Denny, Brett Becker, Andrew Luxton-Reilly, and James Prather Proceedings of the 2022 ACM Australasian Computing Education Research Conference (ACE '22) Peer-reviewed publication with 38% acceptance rate
Best Paper Award
2021
On Designing Programming Error Messages for Novices: Readability and its Constituent Factors [link] (May, 2021)
Paul Denny, James Prather, Brett A. Becker, Catherine Mooney, John Homer, Zachary Albrecht, Garrett Powell Proceedings of the 2021 ACM Conference on Human Factors in Computing Systems (CHI'21) Peer-reviewed publication with 26.3% acceptance rate
Towards Assessing the Readability of Programming Error Messages [link] (February, 2021)
Brett A. Becker, Paul Denny, James Prather, Raymond Pettit, Robert Nix, Catherine Mooney Proceedings of the 2021 ACM Australasian Computing Education Research Conference (ACE '21) Peer-reviewed publication with 35% acceptance rate
2020
What Do We Think We Think We Are Doing?: Metacognition and Self-Regulation in Programming [link] (August, 2020)
James Prather, Brett A. Becker, Michelle Craig, Paul Denny, Dastyni Loksa, Lauren Margulieux Proceedings of the 2020 ACM Conference on International Computing Education Research (ICER '20) Peer-reviewed publication with 22% acceptance rate
Best Paper Award
Error Message Readability and Novice Debugging Performance [link] (June, 2020)
Paul Denny, James Prather, Brett A. Becker Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’20) Peer-reviewed publication with 28% acceptance rate
Best Paper Finalist
2019
Compiler Error Messages Considered Unhelpful: The Landscape of Text-Based Programming Error Messages [link] (December, 2019)
Brett A. Becker, Paul Denny, Raymond Pettit, Durell Bouchard, Dennis J. Bouvier, Brian Harrington, Amir Kamil, Amey Karkare, Chris McDonald, Peter-Michael Osera, Janice L. Pearce, and James Prather. Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’19) Peer-reviewed publication
A Closer Look at Metacognitive Scaffolding: Solving Test Cases Before Programming [link] (November, 2019)
Paul Denny, James Prather, Brett Becker, Zachary Albrecht, Dastyni Loksa and Raymond Pettit Proceedings of the 19th Koli Calling International Conference on Computing Education Research (Koli Calling '19) Peer-reviewed publication with 37% acceptance rate
Unexpected Tokens: A Review of Programming Error Messages and Design Guidelines for the Future [link] (July, 2019)
Brett A. Becker, Paul Denny, Raymond Pettit, Durell Bouchard, Dennis J. Bouvier, Brian Harrington, Amir Kamil, Amey Karkare, Chris McDonald, Peter-Michael Osera, Janice L. Pearce, and James Prather. Proceedings of the Innovation and Technology in Computer Science Education conference (ITiCSE ’19) Peer-reviewed publication
First Things First: Providing Metacognitive Scaffolding for Interpreting Problem Prompts [link] (June, 2019)
James Prather, Raymond Pettit, Brett Becker, Paul Denny, Dastyni Loksa, Alani Peters, Zachary Albrecht, and Krista Masci ACM Inroads 10 (2), 42-49 Re-print in official ACM magazine
Best Paper at SIGCSE 2019 in the CS Education Track
First Things First: Providing Metacognitive Scaffolding for Interpreting Problem Prompts [link] (February, 2019)
James Prather, Raymond Pettit, Brett Becker, Paul Denny, Dastyni Loksa, Alani Peters, Zachary Albrecht, and Krista Masci Proceedings of the 2019 ACM Technical Symposium on Computer Science Education (SIGCSE) Peer-reviewed publication with 32% acceptance rate.
1st Place Best Paper Award
2018
Metacognitive Difficulties Faced by Novice Programmers in Automated Assessment Tools [link] (August, 2018)
James Prather, Raymond Pettit, Kayla McMurry, Alani Peters, John Homer, and Maxine Cohen Proceedings of the 2018 ACM Conference on International Computing Education Research (ICER) Peer-reviewed publication with 22% acceptance rate.
Beyond Automated Assessment: Building Metacognitive Awareness in Novice Programmers in CS1 [link] (May, 2018)
James Prather Doctoral dissertation Committee Members: Maxine Cohen (chair), Raymond Pettit, and Michael Laszlo
2017
On Novices’ Interaction with Compiler Error Messages: A Human Factors Approach [link] (August, 2017)
James Prather, Raymond Pettit, Kayla McMurry, Alani Peters, John Homer, Nevan Simone, and Maxine Cohen Proceedings of the 2017 ACM Conference on International Computing Education Research (ICER) Peer-reviewed publication with 27% acceptance rate.
Trust Management for Cheating Detection in Distributed Massively Multiplayer Online Games [link] (June, 2017)
James Prather, Robert Nix, Ryan Jessup 15th Annual Workshop on Network and Systems Support for Games (NetGames) in cooperation with ACM SIGMM and ACM SIGCOMM Peer-reviewed publication with 33% acceptance rate.
Automated Assessment Tools: Too Many Cooks, Not Enough Collaboration [link] (March, 2017)
Raymond Pettit and James Prather Journal of Computing Sciences in Colleges 32(4) Peer-reviewed publication.
Student Research from My Lab
The following papers were written by undergraduate students whom I mentored or who were members of the SIGCHI local chapter that I sponsor.
Enhancing Programming Error Messages in Real Time with Generative AI (May, 2024)
Bailey Kimmel, Austin Geisert, Lily Yaro, Brendan Gipson, Taylor Hotchkiss, Sidney Osae-Asante, Hunter Vaught, Grant Wininger, Chase Yamaguchi Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24) Peer-reviewed publication with 38% acceptance rate
The Emotional Impact of Multi-Factor Authentication for University Students (May, 2022)
Davis Arnold, Benjamin Blackmon, Brendan Gipson, Anthony Moncivais, Garrett Powell, Megan Skeen, Michael Thorson, Nathan Wade Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22) Peer-reviewed publication with 50% acceptance rate
3rd Place Student Research Competition
Learned Helplessness and Mental Health Issues Related to Distance Learning Due to COVID-19 (May, 2021)
Adam Garcia, Garrett Powell, Davis Arnold, Luis Ibarra, Matthew Pietrucha, Michael Thorson, Abigail Verhelle, Nathan Wade, Samantha Webb Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21) Peer-reviewed publication with 50% acceptance rate
Facts and Stories in Phishing Training: A Replication and Extension (April, 2020)
John Marsden, Zachary Albrecht, Paula Berggren, Jessica Halbert, Kyle Lemons, Anthony Moncivais, Matthew Thompson Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20) Peer-reviewed publication with 29% acceptance rate
Using Parsons Problems to Scaffold Metacognitive Awareness (March, 2020)
John Marsden Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE '20) Peer-reviewed publication
Print (“Good Luck!”): Measuring the Effect of Autogenerated Social Encouragement on Student Anxiety (April, 2018)
Collin Blanchard, Holly Buff, Travis Cook, Raquel Dottle, Gideon Luck, Alani Peters, Virginia Pettit, Isaak Ramirez, and Jessica Wininger Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18) Peer-reviewed publication with 50% acceptance rate
The Role of Chronology in Analyzing Introductory Programming Assignments (February, 2016)
Kayla Holcomb McMurry, Nevan Simone Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16) Peer-reviewed publication
1st Place Student Research Competition
Monographs
The Gospel of John According to the Syriac Peshitta Version with English Translation (2014)
Jeff W. Childers and James Prather Gorgias Press, Piscataway, NJ Invited monograph
Interests
I'm a husband and father of four kids, so when I'm not at work, I'm usually hanging out with my family, going on adventures with them, or travelling. I love travelling. My wife and I were study abroad sponsors to Oxford during the summer of 2017 (I kept a daily travel blog if you'd like to see what that was like). I've travelled internationally to attend conferences and present research. I'm also a committed Christian who believes in living the words of Jesus ("love your neighbor as yourself"), the words of the prophets ("do not oppress the widow, the orphan, the alien, or the poor"), and the words of Paul ("do nothing from selfish ambition or conceit, but in humility regard others as better than yourselves"). I think Christianity calls for inclusion and Jesus models that behavior. I enjoy a good theological discussion. I also enjoy combining my two interests, Computer Science and religion, into work in Digital Humanities and am part of a three-year grant project to perform data analysis on ancient Ethiopic manuscripts of the Old Testament.
In the rare moment that I have downtime, I enjoy playing video games with my students (mostly Blizzard's Overwatch), playing D&D with friends, reading a good fantasy/steampunk/sci-fi novel, or writing my own novel (yeah, everyone is writing a novel).