Launch of Umibot – 1 December 2022

  • Posted by IBA Project
  • On 29 November 2022

RMIT Media Release | Words by Shu Shu Zheng

New chatbot goes online to fight image-based abuse

Experts in abusive online behaviour say their world-first AI chatbot will help people report incidents of
image-based abuse and find support.

Image-based abuse – when someone takes, shares or threatens to share nude, semi-nude or sexual
images or video without consent – has become a growing issue, experienced by 1 in 3 Australians
surveyed in 2019.

Lead researcher behind the creation of ‘Umibot’, Professor Nicola Henry from RMIT University’s Social
and Global Studies Centre, said ‘deepfake’ content (fake videos or images generated using AI), incidents
where people are pressured into creating sexual content, and the sending of unsolicited sexual images or
videos also count as image-based abuse.

“It’s a huge violation of trust that’s designed to shame, punish or humiliate. It’s often a way for
perpetrators to exert power and control over others,” said Henry, who is an Australian Research Council
Future Fellow.

“A lot of victim-survivors we talked to just want the issue to go away and the content to be taken down
or removed but often they don’t know where to go for help.”

That is what this pilot chatbot is here to address.

The idea came to Henry after conducting interviews with victim-survivors about their experiences of
image-based abuse.

While the people she spoke to had diverse experiences, Henry said they often did not know where to go
for help and some did not know that what had happened to them was a crime.

“The victim-survivors we interviewed said they were often blamed by friends, family members and
others and made to feel ashamed, which made them even more reluctant to seek help,” said Henry.

Dr Alice Witt, an RMIT Research Fellow working on the project with Henry, said Umibot is not a
replacement for human support, but it is designed to help people navigate complex pathways and
provide them with options for reporting and tips on collecting evidence or how to keep safe online.

“It is not just for victim-survivors,” said Witt.

“Umibot is designed to also help bystanders and even perpetrators as a potential tool to prevent this
abuse from happening.”

How does Umibot work?

Users can type questions for Umibot, or they can select answers from a set of options.

Umibot also asks users whether they are over or under 18, and whether they need help for
themselves, need help for someone else, or are concerned about something they have done. These
answers shape the support and information the chatbot offers, to suit each user’s experience.
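The intake flow described above can be pictured as a simple triage step. The sketch below is purely illustrative (the function and pathway names are assumptions, not the project's actual code): two answers, age bracket and role, route the user to a tailored support pathway.

```python
# Hypothetical sketch of Umibot-style intake triage. All names here are
# illustrative assumptions; they do not reflect the real implementation.

def triage(over_18: bool, role: str) -> str:
    """Route a user to a support pathway from the two intake answers.

    role is one of: "self" (help for themselves), "other" (help for
    someone else), or "own_behaviour" (concerned about something they
    have done).
    """
    if not over_18:
        # Under-18s are routed to youth-specific support and reporting options.
        return "under_18_support"
    if role == "self":
        return "victim_survivor_support"
    if role == "other":
        return "bystander_support"
    if role == "own_behaviour":
        return "perpetrator_prevention"
    raise ValueError(f"unknown role: {role!r}")
```

In a real hybrid chatbot this routing would sit behind an intent-recognition layer (the article notes Umibot is built on Amazon Lex), so free-typed questions and button selections can both feed into the same pathways.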

Henry says Umibot is the first of its kind that is dedicated to victim-survivors of image-based abuse.

“There are other chatbots out there that more broadly help people who’ve experienced different online
harms, but they are not focused on image-based abuse and they don’t have the same hybrid
functionality that allows users to type questions to the chatbot,” said Henry.

A new approach to chatbot design

Created with the support of an Australian Research Council Future Fellowship grant, Henry and Witt
worked with Melbourne-based digital agency Tundra to create Umibot using Amazon Lex, an artificial
intelligence service for building natural language chatbots.

“We know victim-survivors of image-based abuse face a spectrum of experiences over and above image-based abuse, so we developed Umibot as a fully inclusive and trauma-informed empowerment tool to
support people who have diverse experiences and come from different backgrounds,” Henry said.

The team also worked with a diverse range of consultants and did an independent accessibility audit to
make sure Umibot was as compliant as possible with global accessibility standards for people with
disabilities.

“Our main ethical challenge was to make sure Umibot didn’t cause any harm or trauma, or make the
user feel burdened,” said Witt.

“A lot of victim-survivors are not ready to talk to a person about their experiences, so teaching Umibot
how to be empathetic and helpful is a way for them to seek support without any pressure.”

Next steps for Umibot

With Umibot available to use right now, the researchers hope to develop a second version of Umibot for
victim-survivors, bystanders and perpetrators of image-based abuse in the next few years.

“We hope that Umibot will not only empower victim-survivors to find support, but also help us create
‘best practice’ guidelines for designing, developing and deploying digital tools and interventions for
addressing online harms more broadly,” said Witt.

