# Automated Classification and Feedback Generation for Programming Assignments

Online-learning platforms (e.g., Coursera, Khan Academy, edX) are very popular, and massive open online courses (MOOCs) have attracted thousands of students. We study the following research problem: in a traditional classroom setting, students receive feedback (hints) from an instructor; in MOOCs this is not possible, since there are typically only a few instructors. For introductory programming assignments, students need feedback on two aspects of their code:

1. Correctness: guidance for writing a correct program.
2. Performance: help for improving the performance of an already correct program.

We have developed a tool that classifies student programs by their high-level idea (which we also call the algorithmic strategy). This classification enables us to automatically give feedback on both correctness and performance: by recognizing the student's high-level idea, we can say “good job, your code is efficient!” or suggest how to improve the code. Similarly, we can compare the student's code to a correct program with the same high-level idea and, based on the differences, suggest how to repair the program. The examples below illustrate these ideas.

Assignment: Decide if two strings (s and t) are anagrams (two strings are anagrams if they can be permuted to become equal).

Sorting-based solutions — feedback for both: “Instead of sorting, compare the number of characters in strings”.

```
bool Puzzle(string s, string t) {
    var sa = s.ToCharArray();
    var ta = t.ToCharArray();
    Array.Sort(sa);
    Array.Sort(ta);
    return sa.SequenceEqual(ta);
}
```

```
bool Puzzle(string s, string t) {
    if (s.Length != t.Length) return false;
    char[] sa = s.ToCharArray();
    char[] ta = t.ToCharArray();
    for (int j = 0; j < sa.Length; j++) {
        for (int i = 0; i < sa.Length - 1; i++) {
            if (sa[i] < sa[i+1]) {
                char temp = sa[i]; sa[i] = sa[i+1]; sa[i+1] = temp;
            }
            if (ta[i] < ta[i+1]) {
                char temp = ta[i]; ta[i] = ta[i+1]; ta[i+1] = temp;
            }
        }
    }
    for (int k = 0; k < sa.Length; k++) {
        if (sa[k] != ta[k]) return false;
    }
    return true;
}
```

Counting-based solutions — feedback for both: “Count the characters in a pre-processing phase, instead of in a loop”.

```
bool Puzzle(string s, string t) {
    return s.All(c => s.Where(c2 => c2 == c).Count()
                   == t.Where(c2 => c2 == c).Count());
}
```

```
bool Puzzle(string s, string t) {
    if (s.Length != t.Length) return false;
    foreach (Char ch in s.ToCharArray()) {
        if (countChars(s, ch) != countChars(t, ch)) {
            return false;
        }
    }
    return true;
}

int countChars(String s, Char c) {
    int number = 0;
    foreach (Char ch in s.ToCharArray()) {
        if (ch == c) {
            number++;
        }
    }
    return number;
}
```

These four examples show correct but inefficient programs (none runs in linear time, while there exists an O(n) solution — do you know what it is?). The counting-based programs use a different high-level idea (algorithmic strategy) than the sorting-based ones: counting character occurrences instead of sorting. Our tool classifies programs by the algorithm they use and, despite their syntactic differences, gives the same feedback for both sorting-based programs and the same feedback for both counting-based programs.
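For reference, one possible O(n) strategy counts character occurrences in a single pass. The following C sketch is our own illustration (the function name `is_anagram` and the fixed 256-entry table are assumptions, not part of the tool or the assignment), and it assumes 8-bit characters:

```c
#include <stdbool.h>
#include <string.h>

/* O(n) anagram check: one pass to count characters of s,
 * one pass to subtract characters of t. */
bool is_anagram(const char *s, const char *t) {
    int counts[256] = {0};
    if (strlen(s) != strlen(t)) return false;
    for (const char *p = s; *p; p++)
        counts[(unsigned char)*p]++;
    for (const char *p = t; *p; p++)
        if (--counts[(unsigned char)*p] < 0) return false;
    return true;
}
```

This avoids both the sort and the nested counting loop, at the cost of a small fixed-size table.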

Assignment: Read two integers 0 <= n <= m, and count the Fibonacci numbers in the closed interval [n,m].

Correct program:

```
int main() {
    long a = 0, b = 1, c = 0, cnt = 0;
    long int n, m;
    scanf("%ld%ld", &n, &m);
    while (c <= m) {
        c = a + b;
        if ((c >= n) && (c <= m)) cnt = cnt + 1;
        a = b;
        b = c;
    }
    printf("%d", cnt);
}
```

Incorrect program; feedback: “1. Assign 0 to n2 and f, and 1 to n1 before the loop; 2. Move the statement n2=n1; before n1=f; in the loop.”

```
int main() {
    long int n, m, f, n1 = 2, n2 = 3;
    int c = 0;
    scanf("%ld %ld", &n, &m);
    while (f <= m) {
        f = n1 + n2;
        n1 = f;
        n2 = n1;
        if (f >= n && f <= m) c++;
    }
    printf("%d", c);
}
```
Our tool computes the smallest number of changes that transform the incorrect program into the correct program, resulting in the feedback shown above. In fact, our tool computes such a repair with regard to every correct program in the database and reports only the best repair (the one with the smallest number of edits) across all correct programs.
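The “smallest number of changes” idea can be illustrated with classic edit distance. The actual tool works on program structure rather than raw text, so the following C sketch over statement sequences (the function name `edit_distance` and the string-per-statement representation are our own simplification) only conveys the underlying principle:

```c
#include <string.h>

#define MAXLEN 64

/* Simplified illustration: Levenshtein distance between two statement
 * sequences a[0..n-1] and b[0..m-1]. d[i][j] is the minimal number of
 * insertions, deletions, and replacements turning the first i statements
 * of a into the first j statements of b. Requires n, m <= MAXLEN. */
int edit_distance(const char *a[], int n, const char *b[], int m) {
    int d[MAXLEN + 1][MAXLEN + 1];
    for (int i = 0; i <= n; i++) d[i][0] = i;
    for (int j = 0; j <= m; j++) d[0][j] = j;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) {
            int cost = strcmp(a[i-1], b[j-1]) == 0 ? 0 : 1;
            int best = d[i-1][j-1] + cost;                   /* keep/replace */
            if (d[i-1][j] + 1 < best) best = d[i-1][j] + 1;  /* delete */
            if (d[i][j-1] + 1 < best) best = d[i][j-1] + 1;  /* insert */
            d[i][j] = best;
        }
    return d[n][m];
}
```

Running this against every correct program in a database and keeping the minimum mirrors the tool's selection of the best repair, although the tool's edits (as in the Fibonacci feedback above) are expressed at the level of assignments and statement order rather than whole lines.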

Classification and feedback generation are very active areas of research, and our goal is to develop techniques and tools that improve the state of the art in automated programming education. If you find these results exciting, we can offer you a variety of topics for bachelor/master theses, projects for practical course work, and student jobs.

[Contact Florian Zuleger, Ivan Radiček]

For more information, also check our recent paper on this topic: Feedback Generation for Performance Problems in Introductory Programming Assignments.

## Helmut Veith Stipend Award Ceremony

The Vice Rector for Academic Affairs of TU Wien, Kurt Matyas, will award the scholarship recipient of the Helmut Veith Stipend at the award ceremony on Friday, April 06, 2018 in the Kontaktraum, starting at 17:05.

## FORSYTE organizes CAV 2018

The FORSYTE group is co-organizing the 30th International Conference on Computer Aided Verification (CAV).

## FMSD Special Issue in Memoriam Helmut Veith

In memory of Helmut Veith, the founder of the FORSYTE research group, the current issue of the Journal on Formal Methods in System Design is a Special Issue in Memoriam Helmut Veith.

## FRIDA’17 at DISC

We have a great program at FRIDA’17, which takes place in Vienna.

## Helmut Veith Stipend 2017: Deadline Extension (November 30)

The application deadline for the Helmut Veith Stipend 2017 has been extended to November 30.