Trustworthy as a feature, not a bug

This article is the second in a series exploring the role of trustworthy AI in mainstreaming sustainable investing.

Three key questions

In the first article in this series, I set out the three key questions for trustworthy AI as a prerequisite for sustainable investing. First, what would an end user need to see to find the AI trustworthy? Second, is that trustworthiness enough for this high-risk use case? Third, how can these levels be maintained over time?

In this article I want to drill down on one of the use cases mentioned, to examine how exactly we can answer these three questions in the real world. Ask_Cameron is Uswitch for sustainable pensions [full disclosure: a Maxwell Data product]. Cameron is a conversational AI that builds your sustainability profile, assigns you to a "tribe" of like-minded pensions members, and suggests funds that best match your profile. You can see and use Cameron here.

Each of the three key questions is pertinent to Cameron's use case:

  1. As an untested AI approach in a historically low-trust market targeted at disengaged users, Cameron needs to radiate high levels of trustworthiness even to be opened.

  2. While 79% of retail investors consider sustainability important, lack of transparency and data reliability act as major barriers to confidence in sustainable investing. This has led to regulators introducing product labels, anti-greenwashing rules, and mandatory disclosures.

  3. Maintaining trust over time is complicated by a fast-evolving market and slower evolving personal preferences. For example, an issue like underage workers in the supply chain of fast fashion that was only emerging ten years ago may now be an industry hygiene factor.

So how did Cameron set about building trustworthiness as a feature?

Trusting like a human

To start with, we wanted to think about trust the way real people think about it. In other words, not as a feature to be engineered according to a framework like IBM's. Last week I saw this described perfectly in a post by George Hesmondhalgh talking about a similarly distrusted sector:

I went to the City of London Annual Water Debate last week and something struck me. There was lots of talk about the need for public trust, but what was missing was how, as a sector, we earn it back.... I came to the conclusion that organisations have to do the same as individuals when they want to be trusted, focus on being trustworthy. And that comes in three parts: do the job well (competent), tell the truth when things go wrong (honest), and do both of those things over and over, not just when people are watching (reliable).

This is exactly right. Trustworthiness is the quality of being deserving of trust. The water sector faces a projected 20% shortfall by 2050. Closing that gap requires a combined effort by the industry, government and users to change behaviours. But that can't happen unless the industry shows itself to be a genuinely trustworthy player.

Similarly in the workplace pensions industry, there is a huge appetite for sustainable pensions. 79% of Gen Z employees want their workplace pension to be sustainable [source: Scottish Widows]. Yet this won't happen unless the industry shows itself to be a genuinely trustworthy player.

Cameron was built to implement the three parts of trustworthiness - competent, honest and reliable. This model was originally set out in Fisher, Justin, Heerde, Jennifer and Tucker, Andrew. 2010. 'Does One Trust Judgement Fit All? Linking Theory and Empirics'. The British Journal of Politics and International Relations.

1. Does what it says on the tin

In the first instance, a sustainable preferences engine like Cameron needs to generate preferences. It needs to do what it says on the tin; it needs to be competent. Cameron uses a four-dimensional materiality framework that quantifies the user's sustainability preferences from their conversational text across four dimensions: importance, sentiment, urgency, and conviction.

  • Importance is the relative significance the stakeholder assigns to this sustainability factor in investment decision-making. Marked on a 1 - 5 scale.

  • Sentiment quantifies the stakeholder’s attitude toward companies addressing the issue. Marked on a -1 to +1 scale.

  • Urgency captures the temporal dimension of the stakeholder's preferences, i.e., how quickly they expect corporate action on this issue. Marked on a 1 - 3 scale.

  • Conviction assesses the strength and certainty of the stakeholder’s position on this driver. Marked on a 1 - 3 scale.

Importantly, the user can review the results in the app and amend if they do not agree with the output.
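To make the four dimensions concrete, here is a minimal sketch of what a single scored driver could look like as a data record, with the stated ranges enforced. The class and field names are illustrative, not Cameron's actual schema.

```python
from dataclasses import dataclass


@dataclass
class MaterialityScore:
    """One sustainability driver scored on the four dimensions described above.

    Names and structure are hypothetical, for illustration only.
    """
    driver: str        # e.g. "supply-chain labour standards"
    importance: int    # 1-5: weight in investment decision-making
    sentiment: float   # -1.0 to +1.0: attitude toward companies addressing the issue
    urgency: int       # 1-3: how quickly corporate action is expected
    conviction: int    # 1-3: strength and certainty of the position

    def __post_init__(self):
        # Reject values outside the ranges given in the framework above.
        if not 1 <= self.importance <= 5:
            raise ValueError("importance must be on a 1-5 scale")
        if not -1.0 <= self.sentiment <= 1.0:
            raise ValueError("sentiment must be on a -1 to +1 scale")
        if not 1 <= self.urgency <= 3:
            raise ValueError("urgency must be on a 1-3 scale")
        if not 1 <= self.conviction <= 3:
            raise ValueError("conviction must be on a 1-3 scale")


score = MaterialityScore("deforestation", importance=4, sentiment=0.8,
                         urgency=2, conviction=3)
```

Keeping the scales in one validated record makes it straightforward to show users their scores for review and amendment, as described above.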

2. Do I care?

In the second instance, a sustainable preferences engine like Cameron needs to understand where the user is coming from. In other words, is it honest? Everyone has their own understanding of 'sustainability'. For some, it's only about the environment whereas for others it's also about social and governance issues. Cameron employs a multi-agent architecture which enables modular, interpretable decision-making where each agent’s contribution to the dialogue flow remains fully traceable.

  • Input Validation to distinguish whether there is valid engagement with ESG topics or not. Importantly, user apathy toward specific sustainability factors constitutes meaningful preference information rather than invalid input.

  • Clarification Agent receives ESG driver definitions from a curated knowledge base and reformulates them in accessible language while maintaining conceptual accuracy. The agent does not pose follow-up questions, instead providing concise explanatory statements (maximum two sentences) that enable stakeholders to engage with the original question after receiving necessary context.

  • Refocusing Agent addresses partial responses where stakeholders fail to address all aspects of multidimensional questions. Upon activation, the agent acknowledges addressed topics while redirecting attention to remaining dimensions, maintaining conversational flow without generating entirely new questions.

  • Question Generation Agent is responsible for advancing the conversation through the predetermined sequence of ESG topics. This agent formulates questions that balance adherence to structured interview protocols with natural conversational progression.
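The division of labour between these agents can be sketched as a simple dispatcher. This is a toy illustration of the routing logic only; the function name, heuristics, and agent labels are my own assumptions, not Cameron's internals.

```python
def route_turn(user_reply: str, unaddressed_dimensions: list[str]) -> str:
    """Decide which agent role (as described above) handles the next turn.

    Hypothetical sketch: real routing would use an LLM classifier,
    not keyword heuristics.
    """
    reply = user_reply.strip().lower()

    # Input Validation: no substantive engagement to interpret at all.
    # (Note: expressed apathy, e.g. "I don't care about this", would still
    # count as meaningful preference information and pass validation.)
    if not any(ch.isalpha() for ch in reply):
        return "input_validation"

    # Clarification Agent: the user is asking what the driver means.
    if reply.endswith("?") or "what does" in reply or "mean" in reply:
        return "clarification"

    # Refocusing Agent: a partial answer left some dimensions unaddressed.
    if unaddressed_dimensions:
        return "refocusing"

    # Question Generation Agent: advance to the next ESG topic.
    return "question_generation"


print(route_turn("What does biodiversity mean?", []))   # → clarification
print(route_turn("I care a lot about this", ["social"]))  # → refocusing
```

The point of the sketch is the interpretability claim above: because each turn is handled by exactly one named role, each agent's contribution to the dialogue flow remains traceable.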

3. Does it work over time?

In the third instance, a sustainable preferences engine like Cameron needs to work over time. In other words, is it reliable? This is where time is an essential variable, both during the conversation and afterwards.

  • At completion, Cameron runs a retrospective analysis on the entire transcript as a single document, computing materiality scores for all drivers independently. This holistic assessment provides validation of findings and reveals potential discrepancies between real-time tracking and comprehensive evaluation.

  • Cameron is designed to encourage repeated and frequent interactions because both the market and user's preferences evolve over time. The analysis provides the user's 'tribe' of people like them. This is a well-known builder of trust - people like me - but also allows for gamification and tailored communications.
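The first bullet above describes a consistency check: scores tracked in real time during the conversation are compared against scores recomputed holistically from the full transcript. A minimal sketch of that comparison follows; the function name, tolerance, and example figures are illustrative assumptions, not Cameron's actual implementation.

```python
def flag_discrepancies(realtime: dict[str, float],
                       holistic: dict[str, float],
                       tolerance: float = 1.0) -> list[str]:
    """Return drivers where the retrospective (whole-transcript) score
    diverges from the running real-time score by more than `tolerance`.

    Hypothetical sketch of the validation step described above.
    """
    return sorted(
        driver for driver in realtime
        if driver in holistic
        and abs(realtime[driver] - holistic[driver]) > tolerance
    )


# Example: importance scores (1-5 scale) tracked live vs. recomputed
# over the entire transcript at completion.
realtime = {"climate": 5.0, "labour": 2.0, "governance": 3.0}
holistic = {"climate": 5.0, "labour": 4.0, "governance": 3.5}
print(flag_discrepancies(realtime, holistic))  # → ['labour']
```

Flagged drivers are exactly the "potential discrepancies between real-time tracking and comprehensive evaluation" the holistic pass is there to reveal, and are natural candidates to surface for user review.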

For more information on how Cameron works, see Winter, Benjamin and Castagna, Federico and Baker, Ross and Tucker, Andrew. March 2026. Cameron: An Interpretable Conversational AI Framework for Multi-Dimensional Sustainability Preference Assessment.

Does it work?

We evaluated Cameron through a mixed-methods study with 100 pension scheme members. Results demonstrate 93% question relevance and 64% natural language quality ratings. On trustworthiness itself, 88% of respondents rated Cameron positively at the end of the engagement, 29% reported that their trust had increased, and 91% said they would trust Cameron for sustainable pension investing decisions.

Two other interesting findings from the evaluation should be mentioned. 68% of participants did not notice distinct agent behaviours, revealing a tension between architectural transparency and user-perceived explainability. The evaluation also identified that 90-91% of participants had no prior experience with systematic sustainability preference elicitation, highlighting a significant gap in current pension fund practice.

Cameron treats building trustworthiness as a feature. It is designed to answer the three key questions for trustworthy AI as a prerequisite for sustainable investing. First, what would an end user need to see to find the AI trustworthy? Second, is that trustworthiness enough for this high-risk use case? Third, how can these levels be maintained over time? The initial evaluation results are strong, but it is the results in the field that will prove the use case. Cameron is in trials now with several UK-based pension master trusts. We will report on these trial results in due course.

Next

The key to sustainable investing lies in trustworthy AI