Reid Holmes

 
Prospective Graduate Students / Postdocs

This faculty member is currently not actively recruiting graduate students or Postdoctoral Fellows, but might consider co-supervision with another faculty member.

Professor

Research Interests

computer science
open source software
software comprehension
software development tools
software engineering
software quality
software testing
static analysis


Graduate Student Supervision

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

Designing integrated development environments for all ages through tinkering (2023)

As programming becomes more ubiquitous, development environments must accommodate a more diverse set of users. Considering how to better support users of different ages and genders who program is an important first step toward designing more inclusive development environments. Our research takes a step towards that goal: we evaluated how 91 end users (97% women) programmed in a live programming environment that we created and designed to support tinkering. The diverse ages of the participants, ranging from the 19-29 bracket to over 70, allowed us to identify trends in how differently-aged participants worked to complete their tasks. In addition to task performance, we analyzed the participants' thoughts and emotional responses towards the environment's features to learn which aspects they found insightful, which they found confusing, and which encouraged experimentation. We found that while older participants were less successful than younger participants, participants of all ages were more likely to succeed if they iterated more and decomposed tasks into partially correct programs. Additionally, users found the environment engaging and favoured visual feedback both when making progress and when stuck. Our results provide insights into how development environments can be designed to more inclusively support a broader set of end user programmers.


Evaluating the quality of student-written software tests with curated mutation analysis (2023)

An important learning outcome in software engineering education is the ability to write an effective test suite that rigorously tests a target application. The standard approach for assessing test suites is to check coverage, which can be problematic because coverage rewards code invocation without checking test assertion correctness. Mutation Analysis (injecting a small fault into a clone of a codebase) has been used in both industry and academia to check test suite quality. A mutant is killed if any test in the test suite fails on the clone. More mutants killed indicates a stronger suite, as it is more sensitive to defects. Mutation Analysis has been limited in an educational setting because of the prohibitive cost in both time and compute power to run the students' suites over all generated clones. We employed Mutation Analysis to assess test suite quality in our upper-year Software Engineering course at a large research-intensive university. This paper makes two contributions: (1) we show that it is feasible and effective to use a small sample of hand-written mutants for grading, and (2) we assess effectiveness for promoting student learning by comparing students graded with coverage to those graded with Mutation Analysis. We found that mutation-graded students write more correct tests, check more of the behaviour of invoked code, and more actively seek to understand the project specification.
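The coverage-versus-mutation distinction above can be illustrated with a tiny, hypothetical sketch (the function, mutant, and test names are invented for this example and are not from the course infrastructure):

```python
# The original function under test:
def add(a, b):
    return a + b

# A mutant: a clone of the code with one small injected fault (+ became -).
def add_mutant(a, b):
    return a - b

# A coverage-only test invokes the code but asserts nothing, so it passes
# on the mutant too -- the mutant survives despite full coverage.
def coverage_only_test(impl):
    impl(2, 3)
    return True

# A test with a correct assertion fails on the mutant, i.e. it kills it.
def asserting_test(impl):
    return impl(2, 3) == 5

killed = not asserting_test(add_mutant)  # the faulty clone is detected
```

A suite that kills more such mutants is more sensitive to defects, which is the property the grading scheme rewards.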


Extract : informing microservice extraction decisions from the bottom-up (2023)

Microservice architectures are a popular approach for modernizing legacy monolithic systems. However, transforming an existing monolith into microservices is a complex task for developers. In practice, developers typically perform the decomposition iteratively, extracting one service at a time. Unfortunately, most existing tools do not adequately support such extraction tasks, as they aim to transform a monolith into a microservices architecture all at once. This paper presents Extract, a prototype tool that aligns with the developer's individual workflow for performing a single microservice extraction task. Extract empowers developers to analyze the particular classes and methods within the monolith that they intend to migrate to a new microservice. Moreover, it offers human-in-the-loop support, allowing developers to make well-informed decisions regarding the extraction process and explore alternative designs. We evaluate Extract with four case studies involving open-source monolithic applications. These studies demonstrate how the approach aids developers in making practical decisions during the extraction process. Furthermore, we compare the outcomes of Extract's service designs with manually decomposed microservices for further analysis.


Team harmony before, during, and after COVID-19 (2022)

This work looks at the team harmony experience of pairs in CPSC 310, a large (~300 person) third-year Software Engineering class. For the last seven semesters we asked students to regularly report their sense of equity relating to their contributions to group discussions, influence over task assignments, and overall contributions to their course project development. We examine responses from four periods: prior to COVID-19, during the transitional period as restrictions were applied due to COVID-19, during COVID-19, and after the acute COVID-19 period had ended and restrictions were lifted. Overall, we saw that students experienced a decrease in team harmony during the transition to lockdown and that harmony recovered in subsequent semesters, with some measures gradually trending worse over time in the post-pandemic period (once the restrictions were lifted).


Automated human-in-the-loop assertion generation (2021)

Test cases use assertions to check program behaviour. While these assertions may not be complex, they are themselves code and must be written correctly in order to distinguish between passing and failing test cases. I assert that test assertions are relatively repetitive and straightforward, making their construction well suited to automation, and that tools can reduce developer effort (and simultaneously improve the quality of the assertions in their test suites) by automatically generating assertions that the tester can choose to accept, modify, delete, or augment. Such a tool can fit into a developer workflow where tests are frequently written alongside runnable source code. I examined 33,873 assertions from 105 projects and identified twelve high-level categories that account for the vast majority of developer-written test assertions, confirming that test assertions are fairly simple in practice. To assess the utility of my human-in-the-loop assertion generation thesis, I developed the AutoAssert framework, which generates typical assertions for test cases written for JavaScript code. AutoAssert uses dynamic analysis to determine both which assertions to generate and what values they should verify. The developer can choose to accept, modify, delete, or add to the set of generated assertions. I compared assertions generated by AutoAssert to those written by developers and found that it generates the same kind of assertions as written by developers 84% of the time in a sample of over 1,000 assertions. Additionally, I validated the utility of AutoAssert-generated assertions with 17 developers; these developers found that the majority of generated assertions were useful and expressed considerable interest in using such a tool/approach for their own projects.
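As a rough sketch of the human-in-the-loop idea (this is not AutoAssert itself, which targets JavaScript and uses a much richer dynamic analysis; all names here are hypothetical), a generator can run the code under test, observe the concrete values it produces, and propose assertion text that the tester may accept, edit, or delete:

```python
def generate_assertions(fn, args):
    """Run fn on args, observe the result dynamically, and propose
    assertion source lines for the tester to review."""
    result = fn(*args)
    call = f"{fn.__name__}({', '.join(map(repr, args))})"
    proposals = [f"assert {call} == {result!r}"]  # equality on observed value
    if isinstance(result, (list, str)):
        proposals.append(f"assert len({call}) == {len(result)}")  # length check
    return proposals

# A toy function someone is writing tests for:
def slugify(title):
    return title.lower().replace(" ", "-")

for line in generate_assertions(slugify, ("Hello World",)):
    print(line)
# proposes:
#   assert slugify('Hello World') == 'hello-world'
#   assert len(slugify('Hello World')) == 11
```

The key design point is that the observed values become candidate oracles, and the human in the loop decides which ones encode intended behaviour.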


Investigating the impact of methodological choices on source code maintenance analyses (2021)

Many prediction models rely on past data about how a system evolves to learn and anticipate the number of changes and bugs it will have in the future. As a software engineer or data scientist creates these models, they need to make several methodological choices, such as deciding on size measurements, whether size should be controlled, from what time range metrics should be obtained, etc. In this work, we demonstrate how different methodological decisions can cause practitioners to reach conclusions that are significantly and meaningfully different. For example, when measuring SLOC from the evolving source code of a method, one could decide to use the initial, median, average, final, or a per-change measure of method size. These decisions matter; for instance, one prior study observed better performance of code metrics for defect prediction in general, while another study found negative results when performance was evaluated through a time-based approach. Our results identify the reason behind this contradiction: the age of the methods was not explicitly controlled, and the first six months of a method's evolution could have provided a better understanding of maintenance. Understanding the impact of these different methodological decisions is especially important given the increasing significance of approaches that use these large datasets for software analysis tasks. This work can impact both practitioners and researchers by helping them understand which of the methodological choices underpinning their analyses are important, and which are not; this can lead to more consistency among research studies and improved decision making for deployed analyses.
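To make the size-measurement choice concrete, consider a hypothetical method whose SLOC grows across five changes (the numbers are invented for illustration); each measurement strategy assigns the same method a different "size":

```python
# Hypothetical sizes of one method after each of its five changes.
sloc_per_change = [10, 14, 22, 40, 80]

# The same method yields very different values depending on the choice:
measures = {
    "initial": sloc_per_change[0],                                 # 10
    "final": sloc_per_change[-1],                                  # 80
    "average": sum(sloc_per_change) / len(sloc_per_change),        # 33.2
    "median": sorted(sloc_per_change)[len(sloc_per_change) // 2],  # 22
}
# A model that "controls for size" using one of these measures is
# controlling for a different quantity than a model using another.
```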


Augmenting source code editors with external information (2019)

Developers use many tools and services to acquire code-centric information in their development processes. These tools, such as continuous integration and coverage platforms, make the development process fluid and help developers make better decisions, but developers often need to leave their code editors to retrieve the information from them. In this thesis, I present External Information in Code Editor (EICE), an external information integration technology probe, to understand how to help developers acquire and present information from external sources. I explore the kinds of external information developers need in their development process and how to present it. As visually changing code editors can affect developers' interaction and productivity, I conduct an online survey to understand developers' preferences on visual representations of external information. Finally, I investigate the benefits and disadvantages of EICE plugins in a user study. The results demonstrate that with careful design, the integration of external information into source code editors can be beneficial to developers.


Automatic conceptual window grouping with frequent pattern matching (2019)

While working, software developers constantly switch between different projects and tasks and use many different applications, web resources, and files. These diverse resources are scattered across many windows and lead to cluttered workspaces that can distract developers in their workflows. Having mechanisms to determine which resources belong together for working on a project would allow us to develop tools that could support developers in organizing their work, decluttering their workspace, and switching between projects. Existing approaches in this area often either require users to manually define which resources belong together, or do not examine how users would group the resources themselves and how to best support them. In this thesis we present an approach that automatically detects groups of applications and resources that developers use and that are relevant to the tasks and projects they are working on. These groups are referred to as Conceptual Groups. The approach applies frequent pattern analysis to recorded interaction data and clusters the resulting patterns to retrieve conceptual groups. To measure the accuracy of our approach, we conducted a study with 11 participants and compared it to existing approaches, which it outperformed by up to 50%.


CodeShovel: constructing robust source code history (2019)

Source code histories are valuable resources for developers, and development tools, to reason about the evolution of their software systems. Through a survey with 42 professional software developers, we gained insight into how they use the history of their projects and what challenges they face while doing so. We discovered significant mismatches between the output provided by developers' existing approaches and what they need to successfully complete their tasks. To address these shortcomings, we created CodeShovel, a tool for navigating method histories that is able to quickly produce complete method histories in 90% of cases. CodeShovel enables developers to navigate the entire history of source code methods quickly and reliably, regardless of the transformations and refactorings a method has undergone over its lifetime, helping developers build a robust understanding of its evolution. A field study with 16 industrial developers confirmed our empirical findings of CodeShovel's correctness and efficiency, and additionally showed that our approach can be useful for a wide range of industrial development tasks.


Supporting focused work on window-based desktops (2019)

When working with a computer, information workers continuously switch tasks and applications to complete their work. Given the high fragmentation and complexity of their work, staying focused on the relevant pieces of information can become quite challenging in today's window-based environments, especially with the ever-increasing size of display technology. To support workers in staying focused, we conducted a formative study with 18 professional information workers in which we examined their computer-based and eye-gaze interaction with the window environment and devised a relevance model of open windows. Based on the results, we developed a prototype to dim irrelevant windows and reduce distractions, and evaluated it in a user study. Our results show that participants keep an average of 12 windows open at all times, switch windows every 17 seconds, and that our prototype was able to predict and highlight relevant open windows with high accuracy and was considered helpful by the users.


Context-aware conversational developer assistants (2018)

Building and maintaining modern software systems requires developers to perform a variety of tasks that span various tools and information sources. The crosscutting nature of these development tasks requires developers to maintain complex mental models and forces them (a) to manually split their high-level tasks into low-level commands that are supported by the various tools, and (b) to (re)establish their current context in each tool. In this thesis I present Devy, a Conversational Developer Assistant (CDA) that enables developers to focus on their high-level development tasks. Devy reduces the number of manual, often complex, low-level commands that developers need to perform, freeing them to focus on their high-level tasks. Specifically, Devy infers high-level intent from developers' voice commands and combines this with an automatically-generated context model to determine appropriate workflows for invoking low-level tool actions; where needed, Devy can also prompt the developer for additional information. Through a mixed-methods evaluation with 21 industrial developers, we found that Devy provided an intuitive interface that was able to support many development tasks while helping developers stay focused within their development environment. While industrial developers were largely supportive of the automation Devy enabled, they also provided insights into several other tasks and workflows CDAs could support to enable them to better focus on the important parts of their development tasks.


Enabling configuration self-adaptation using machine learning (2018)

Due to advancements in distributed systems and the increasing industrial demands placed on these systems, distributed systems are comprised of multiple complex components (e.g., databases and their replication infrastructure, caching components, proxies, and load balancers), each of which has its own complex configuration parameters that enable it to be tuned for given runtime requirements. Software engineers must manually tinker with many of these configuration parameters, which change the behaviour and/or structure of the system, in order to achieve their system requirements. In many cases, static configuration settings might not meet certain demands in a given context, and ad hoc modifications of these configuration parameters can trigger unexpected behaviours, which can have negative effects on the quality of the overall system. In this work, I show the design and analysis of Finch, a tool that injects a machine learning based MAPE-K feedback loop into existing systems to automate how these configuration parameters are set. Finch configures and optimizes the system to meet service-level agreements under uncertain workloads and usage patterns. Rather than changing the core infrastructure of a system to fit the feedback loop, Finch asks the user to perform a small set of actions: instrumenting the code and configuration parameters, defining service-level objectives and agreements, and enabling programmatic changes to these configurations. As a result, Finch learns how to dynamically configure the system at runtime to self-adapt to its dynamic workloads. I show how Finch can replace the trial-and-error engineering effort that otherwise would be spent manually optimizing a system's wide array of configuration parameters with an automated self-adaptive system.
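The MAPE-K structure mentioned above can be sketched in miniature (this is a hypothetical rule-based toy, not Finch's machine learning approach; the class, parameter, and threshold names are invented): the loop monitors a metric, analyzes it against a service-level objective, plans a configuration change, and executes it, with a knowledge base of past observations.

```python
class ConfigTuner:
    """Toy MAPE-K loop tuning one parameter (a cache size) against an SLO."""

    def __init__(self, slo_latency_ms, cache_size=64):
        self.slo = slo_latency_ms     # service-level objective
        self.cache_size = cache_size  # tunable configuration parameter
        self.knowledge = []           # K: history of observations

    def monitor(self, observed_latency_ms):
        self.knowledge.append(observed_latency_ms)

    def analyze(self):
        recent = self.knowledge[-3:]
        return sum(recent) / len(recent) > self.slo  # is the SLO violated?

    def plan(self):
        return self.cache_size * 2  # naive plan: double the cache

    def execute(self, new_size):
        self.cache_size = new_size  # programmatic configuration change

    def step(self, observed_latency_ms):
        self.monitor(observed_latency_ms)
        if self.analyze():
            self.execute(self.plan())

tuner = ConfigTuner(slo_latency_ms=100)
for latency in [80, 120, 150, 160]:
    tuner.step(latency)
# cache_size has been doubled twice in response to SLO violations
```

Replacing the fixed `analyze`/`plan` rules with learned models is what distinguishes an approach like Finch from this hand-written loop.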


 
 


 
 
