Behavior-based Clustering of Visual Code — Christopher Scaffidi

Despite the efforts of the Scratch team and their animation library, reuse among Scratch users (and end-user programmers in general) remains very low. You can search through the projects, but you need to know the right words. This differs from textual code, where variable and method names make it easier to search.

Research offers some efforts that could address this, among them Katie Stolee's tools for Yahoo! Pipes, as well as work aimed at professional developers.


Chris wants to build a better search engine. To that end, he first studied what kinds of barriers end-user programmers face. He recruited 20 high-school students for a programming camp and coded the questions they asked according to Andy Ko's learning barriers:


Interestingly enough, the design and coordination barriers did not occur, which could be because the barriers were designed for VBA and Scratch is somewhat different. In conclusion: finding the right Scratch primitive is hard! (This can be compared to the 'realization' moment from another talk here; I will have to look up the reference later.)

To address this, Chris is building a new type of search engine that returns multiple ways to do something, each representing a class of similar programs. Several challenges have to be solved for this:

How to summarize the meaning of a script?

They use a TF-IDF-like model of the kind used in text document clustering. They then tried a number of unsupervised learning algorithms, and X-means turned out to be the best fit (it is like K-means, but you don't have to specify the number of clusters in advance). After some tweaking they managed to get 59 useful clusters.
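As a rough sketch of this kind of pipeline (not the actual implementation from the talk): treat each script as a "document" of block names, weight the blocks with TF-IDF, and cluster the resulting vectors. Since X-means is not in scikit-learn, the sketch approximates its automatic choice of cluster count by scanning K-means over candidate values of k and keeping the best silhouette score. The script data and block names below are made up.

```python
# Sketch only: TF-IDF over Scratch block names, plus clustering with an
# automatic choice of k (approximating what X-means does).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical scripts, each flattened to a sequence of block names.
scripts = [
    "whenGreenFlag forever move turnRight",
    "whenGreenFlag forever move turnLeft",
    "whenKeyPressed changeYBy wait changeYBy",    # a simple "jump"
    "whenKeyPressed changeYBy glideTo changeYBy",
]

# TF-IDF weighting: blocks that appear in every script carry little meaning.
vectors = TfidfVectorizer(token_pattern=r"\S+").fit_transform(scripts)

best_k, best_score, best_model = None, -1.0, None
for k in range(2, len(scripts)):          # scan candidate cluster counts
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    score = silhouette_score(vectors, model.labels_)
    if score > best_score:
        best_k, best_score, best_model = k, score, model

print(f"chose k={best_k}, labels={best_model.labels_.tolist()}")
```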


Evaluation

These clusters were then evaluated with 16 computer science students who had been taught Scratch. Chris identified 6 topics that Scratch users typically want to learn about, gathered 30 related scripts, and asked the participants to rate their similarity. It turns out that programs that are more similar according to the TF-IDF model are indeed rated as more similar by the participants.
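A hedged sketch of the kind of comparison this evaluation implies (not the study's actual analysis): compute the cosine similarity between the TF-IDF vectors of script pairs and correlate it with human similarity ratings using a rank correlation. The script pairs and ratings below are invented for illustration.

```python
# Illustrative check: does TF-IDF similarity track human similarity ratings?
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [  # (script A, script B, mean human rating on a 1-5 scale) -- hypothetical
    ("whenGreenFlag forever move", "whenGreenFlag forever move turnRight", 4.6),
    ("whenGreenFlag forever move", "whenKeyPressed changeYBy wait", 1.8),
    ("whenKeyPressed changeYBy wait", "whenKeyPressed changeYBy glideTo", 4.1),
]

vec = TfidfVectorizer(token_pattern=r"\S+").fit(
    [s for a, b, _ in pairs for s in (a, b)])
tfidf_sims = [cosine_similarity(vec.transform([a]), vec.transform([b]))[0, 0]
              for a, b, _ in pairs]
human = [r for _, _, r in pairs]

rho, p = spearmanr(tfidf_sims, human)  # rank correlation between the two
print(f"Spearman rho={rho:.2f} (p={p:.2f})")
```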

The plan now is to summarize the clusters to generate suggestions. Great work!
