The ethics of automated classification: a case study using a dignity lens

The SmartyGrants Innovation Lab evaluated how the dignity lens, an analytic tool released in 2021 by the Centre for Public Impact, might be applied as an ethics framework to the CLASSIEfier algorithm. This white paper demonstrates how the tool was applied retrospectively to audit each decision made in CLASSIEfier's development.

SmartyGrants launched CLASSIEfier, a text auto-classification system, in 2021 to classify grantmaking records on behalf of grantmakers and other social sector supporters, so that the flow of money in Australia can be tracked by sector, location and beneficiary. The algorithm became a pilot project for exploring ethical considerations in artificial intelligence (AI) systems.
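To make the idea of text auto-classification concrete, the sketch below shows a toy keyword-matching classifier. This is purely illustrative: the category names and keyword sets are invented for this example, and the approach is not CLASSIEfier's actual method, which is not described in this paper.

```python
# Illustrative sketch only: a toy keyword-based text classifier.
# This is NOT the CLASSIEfier implementation; categories and keywords
# here are hypothetical examples for explanation.
from collections import Counter

# Hypothetical category keyword sets, loosely echoing sector-style tagging
CATEGORY_KEYWORDS = {
    "education": {"school", "scholarship", "literacy", "students"},
    "health": {"hospital", "mental", "wellbeing", "clinic"},
    "environment": {"river", "habitat", "conservation", "wildlife"},
}

def classify(description: str) -> str:
    """Return the category whose keywords best match the grant description."""
    words = set(description.lower().split())
    # Score each category by how many of its keywords appear in the text
    scores = Counter({cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()})
    best, score = scores.most_common(1)[0]
    return best if score > 0 else "unclassified"

print(classify("Grant for a school literacy program for rural students"))
# → education
```

Real systems replace keyword matching with statistical or machine-learning models, which is precisely where the questions of accuracy, bias and confidentiality discussed below arise.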

When building the algorithm we faced several ethical questions, including:

  • how to handle grant data without breaching confidentiality and data privacy
  • what degree of model accuracy is acceptable
  • how to overcome human, data and algorithm bias
  • how involved data experts and data owners should be

Since the launch of CLASSIEfier, the Innovation Lab has taken several steps to improve the performance of the algorithm, including facilitating transparency, explainability, interpretability, stakeholder engagement, testing and incorporation of feedback.

This white paper frames these decisions in terms of their impact on dignity, using the dignity lens analytic tool developed by Lorenn Ruster and Thea Snow and published in partnership with the Centre for Public Impact.

How we applied a dignity lens to CLASSIEfier

The dignity lens was developed by Lorenn Ruster, a “responsible tech” collaborator at CPI and a PhD candidate at ANU, and Thea Snow, CPI director.

They define dignity in terms of the inherent value and inherent vulnerability of individuals. They propose thinking about dignity as an ecosystem to capture the different protective and proactive roles that individuals and organisations can play in relation to dignity. A healthy dignity ecosystem is defined as an environment in which individuals and organisations are responsible for creating the conditions for dignity for all.

During the development of CLASSIEfier, the Innovation Lab made decisions through brainstorming and team consensus. The dignity lens analytic tool has been valuable for documenting the ethical questions we faced and the resolutions we reached. We found that the tool can be adapted to auto-classification and other artificial intelligence systems, as well as to other data-driven projects such as data visualisation, insight reports and survey design.


What we learned:

The dignity lens has been applied retrospectively; that is, after decisions had already been made. In the future, we expect to use it earlier in the AI development process, as a planning and design tool.

  • The case study can be used as a template for other AI systems being developed
  • The dignity lens helps us address all 10 essential elements of dignity
  • The dignity lens enables us to give adequate consideration to all stages of AI development
  • The dignity lens provides a language for discussion and debate
  • The dignity lens gives us confidence and a way of documenting our decisions so we can continually improve
