As the use of Artificial Intelligence (AI) becomes increasingly common, it raises challenges for ethics, privacy and human rights. As society and universities focus more on AI and generative AI, UBC graduate students are researching its applications, using AI as a research tool and improving AI systems themselves.

  • Farhan Samir uses a custom AI system to track how online information varies between languages.
  • Oludolapo Makinde finds where and how Canadian companies can use AI to address internal corruption.
  • Saiyue Lyu works on increasing the security of AI systems without compromising their accuracy.

Each student explores how AI can have an impact — learn more about their research below. 

Farhan Samir, Recent PhD Graduate in Linguistics and Postdoctoral Fellow at the University of Toronto 

Wikipedia has become a common source of information, but the site’s content can vary widely across its language editions. Recent PhD graduate Farhan Samir is using an AI tool to find these differences and combat myths about information online.

For example, Russian Wikipedia pages about members of the LGBT community contain a higher proportion of negative facts than their corresponding English pages. The same negative facts appeared on the English pages, but Samir found that the Russian pages of non-LGBT people did not carry as high a proportion of them.

Samir and his team developed and used the InfoGap method for this research. The method combines two AI models to identify variation between Wikipedia’s language editions. One AI model first retrieves facts on a given page and finds the most likely match for each in another language’s version of that page. A second model then assesses the match and determines if both facts express the same idea.
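The two-stage pipeline described above can be sketched in Python. This is a hypothetical illustration, not Samir's actual implementation: the toy character-frequency embedding stands in for a real multilingual sentence encoder, and the similarity threshold stands in for the second model's same-meaning judgment.

```python
from math import sqrt

def embed(text):
    """Toy embedding: a character-frequency vector (a stand-in for a
    real multilingual sentence encoder)."""
    vec = {}
    for ch in text.lower():
        vec[ch] = vec.get(ch, 0) + 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def retrieve_best_match(fact, candidate_facts):
    """Stage 1: find the most likely counterpart of `fact` in the other
    language edition's list of facts."""
    return max(candidate_facts, key=lambda c: cosine(embed(fact), embed(c)))

def same_meaning(fact_a, fact_b, threshold=0.8):
    """Stage 2: decide whether a matched pair expresses the same idea.
    A real system would use a trained entailment model here."""
    return cosine(embed(fact_a), embed(fact_b)) >= threshold

def info_gap(facts_a, facts_b):
    """Return the facts from edition A that have no equivalent in edition B."""
    gaps = []
    for fact in facts_a:
        match = retrieve_best_match(fact, facts_b)
        if not same_meaning(fact, match):
            gaps.append(fact)
    return gaps
```

For instance, `info_gap(["born in 1970", "won a national prize"], ["born in 1970"])` would flag `"won a national prize"` as information missing from the second edition.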

To help the public stay aware of the discrepancies in information, Samir is using InfoGap to develop a browser extension that gives Wikipedia readers additional context. The extension will tell users when another language edition’s version of the page they’re reading has additional information.

“What we're trying to do is point out that there are really big disparities between language editions,” said Samir. “The public should be more aware of the fact that on Wikipedia, you're not getting objective, encyclopedic information.”

Through his research, Samir hopes to show that information on Wikipedia and elsewhere online comes from a small segment of the population with particular perspectives. English Wikipedia’s editors tend to be young, white, adult men, and this narrow demographic affects what information makes it onto the site. Older methods of disseminating information, such as newspapers, had clearer power structures that are less obvious on the web.

“We have entrenched power structures that limit the contribution of information, and the web has concealed that,” said Samir.

With the current InfoGap research and the upcoming browser extension, Samir hopes to demonstrate that information on Wikipedia is not objective but shaped by the power and position of its authors.

Oludolapo Makinde, PhD Candidate in Law

Canadian companies may not have a reputation for engaging in corruption, but they can face this problem internally when doing business overseas. PhD candidate Oludolapo Makinde is researching AI-based anti-corruption measures for Canadian companies, particularly those working in the Global South.  

In anti-corruption work, AI can be useful for preparing reports, comparing policies and improving employee engagement with preventative training. It is especially useful for sorting through data to find suspicious transactions much faster than a human.

However, AI is unregulated in many countries, including Canada, and data protection laws are often lax. This creates a vulnerability in the Global South, where companies want more region-specific data, and legislation is not strong enough to protect people's privacy.

“There’s nothing wrong with collecting data to help build AI systems, but it should be done appropriately,” said Makinde. “You ensure you get their consent, and you work according to their practices and their expectations.”

Data collection is not the only concern with using AI. Systems detecting corruption could be manipulated by the humans using them, and running AI requires data centres whose cooling systems consume a lot of water. Makinde often hears worries that companies will use large amounts of water without adequately compensating the often marginalized communities that depend on that resource.

In response, Makinde emphasizes incorporating perspectives from the Global South in her research, while acknowledging the effects of governance and corruption on environmental efforts.

“Governance is an essential element of sustainability,” said Makinde.

Makinde's research provides a set of best practices for companies that want to tackle corruption. She says that assessing how human rights are affected and having a policy for using AI responsibly can guide employees building AI anti-corruption systems. However, few companies have comprehensive policies, reports and training.

“One of the complaints companies have come up with is they don’t have sufficient information,” said Makinde. “I’m just trying to provide that information, or that resource, to them to think about how they can act on their own.”

Saiyue Lyu, PhD Student in Computer Science

AI is everywhere, but it can be tricked or attacked. In response, PhD student Saiyue Lyu is developing techniques to make AI systems safer and more reliable.  

Randomized Smoothing is a technique used in AI systems to increase their robustness. Systems using it first add randomness, or noise, to multiple versions of an input. Then, the system can make a prediction about the input based on an average of the randomized versions.
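The add-noise-then-average idea can be sketched in a few lines of Python. This is an illustrative toy, not Lyu's code: a one-dimensional threshold rule stands in for a real neural network, and Gaussian noise supplies the randomness.

```python
import random

def base_classifier(x):
    """Toy base classifier: label 1 if the input exceeds 0.5, else 0.
    A real system would use a trained neural network here."""
    return 1 if x > 0.5 else 0

def smoothed_predict(x, sigma=0.1, n_samples=1000, seed=0):
    """Randomized Smoothing: classify many noisy copies of the input
    and return the majority vote as the smoothed prediction."""
    rng = random.Random(seed)
    votes = [base_classifier(x + rng.gauss(0, sigma)) for _ in range(n_samples)]
    return max(set(votes), key=votes.count)
```

Because the prediction rests on a majority over many randomized copies, a small adversarial nudge to `x` rarely changes the outcome; making `sigma` larger buys more of this robustness but blurs predictions near the decision boundary, which is the accuracy cost discussed below.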

Adversarial attacks on AI are serious and difficult to tackle because they can degrade a system’s accuracy in ways that are not visible to human users. But when an attack tries to disturb predictions by adding noise, or additional data, to distract the system, Randomized Smoothing can absorb the extra noise and still predict correctly from the average of the randomized inputs.

Adding randomness can also protect privacy. Lyu’s work connects Randomized Smoothing and Differential Privacy, a formal notion of privacy in which added randomness hides any one individual’s data while still allowing accurate statistics about the group.
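One standard way to realize Differential Privacy (the classic Laplace mechanism, shown here for illustration and not necessarily the mechanism Lyu's work uses) is to add Laplace noise, scaled to the query's sensitivity, to an aggregate statistic:

```python
import math
import random

def private_count(values, predicate, epsilon=1.0, seed=0):
    """Differentially private count via the Laplace mechanism: the true
    count plus Laplace(0, 1/epsilon) noise. A count changes by at most 1
    when one individual's record changes, so its sensitivity is 1."""
    rng = random.Random(seed)
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace noise by inverting its CDF
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Any single person's presence or absence shifts the true count by at most 1, an amount the noise masks, yet the group-level total stays close to correct; a larger `epsilon` means less noise and weaker privacy.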

Though Randomized Smoothing helps AI systems work in the face of attacks and maintain privacy, using this technique can decrease the accuracy of these systems because it adds noise.  

“We're trying to find the optimal balance, where we can add the right amount of randomness to protect data or have some robustness without significantly compromising on the accuracy,” said Lyu. “So, we're trying to play with this trade off.”

Lyu and her team’s version, Adaptive Randomized Smoothing, instead derives the randomness from the input data itself for the randomizing and averaging process. With this new method, AI systems remain robust against attacks while generating accurate predictions.

As AI systems become more common, Lyu ultimately hopes to protect these systems and the privacy of their users through her research.

“Long run, I would hope my research contributes to building trustworthy AI systems by making these systems more transparent, robust and privacy-preserving,” said Lyu. “I think it will lead to AI being more responsible in all of the real-world applications.”


Guest post by Marie Erikson, fifth-year Bachelor of Arts, Philosophy Honours student 
