Thank you for participating in our assessment!

Your score: 0 / 0

The questions included in this questionnaire assess a sample of 12 key criteria that bigspark has determined are needed to meet fundamental regulatory requirements and best practices. Your answers today indicate that you are incurring significant unnecessary risk and liability within your organisation by not applying essential Responsible AI management steps. Please do feel free to contact bigspark if you need any support in rationalising your approach. We are here to support you.

YOUR RESPONSE

RECOMMENDATION

Explainability

Question: The black box nature of AI systems can make them difficult to explain. Recruiters are always accountable for ensuring that the decisions and recommendations made to their clients are transparent and explainable. If you are using AI are you confident that you can explain exactly the decisions you're making both to regulators, and to your customers?

Your answer:

An inability to describe and document the logic used in decision making adds unnecessary risk. Please review the options provided to see if any might help you manage effective improvement.

  1. Ensuring that the use of AI is transparent to your clients
  2. Self auditing and documenting AI. A technical workshop
  3. TCO workshop. Knowing and explaining outcomes and costs
  4. Target operating model workshop

Safety

Question: Every recruitment agency and recruiter needs to find the right balance between leveraging AI effectively, assuring safety and maintaining a personal touch in order to maintain client trust. Are you confident that you have a clear framework in place that enables you to do this when using AI?

Your answer:

Balancing automation with expertise and oversight is critical to introducing AI into your work of building trusted relationships with clients. We suggest considering the following options:

  1. Advisory workshop on Accountability and Governance
  2. Advisory workshop on Target operating model, policy refinement and design
  3. Introducing Human oversight and controls
  4. Chatbot service tiering for customer service
  5. Reviewing escalation, contestability and redress

Fairness

Question: AI algorithms can perpetuate and exacerbate biases present in historical data, potentially leading to discriminatory practices and liability. Recruiters need to constantly monitor and address issues as they arise. Are you confident that you are not discriminating now and that you are appropriately monitoring all AI over their lifecycles?

Your answer:

An inability to assure fairness in your AI not only creates unnecessary risk and liability but can genuinely harm the career prospects of those you serve. Take a moment to consider the following remediation pathways.

  1. Use diverse and representative training data to ensure that it includes a wide range of population samples and demographics in support of critical recruitment scenarios
  2. Use human oversight checkpoints in your ML Ops and prompt review pipelines
  3. Introduce methods of detecting data drift early in continuous integration, pipeline and audit routines
  4. Use AI to stay up to date with compliance requirements
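Detecting data drift, as suggested above, can start with something as simple as comparing live feature distributions against training-time distributions in a CI or audit job. The sketch below computes a Population Stability Index (PSI) in plain Python; the equal-width bucketing and the 0.2 alert threshold are common rules of thumb, not part of any specific toolchain, and are assumptions for illustration only.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples.

    Equal-width buckets are derived from the range of the expected
    (training-time) sample; PSI > 0.2 is a common rule-of-thumb
    drift alert level.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[0] = float("-inf")   # catch live values below the training range
    edges[-1] = float("inf")   # ...and above it

    def fractions(sample):
        counts = [0] * buckets
        for x in sample:
            for i in range(buckets):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / n, 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1 * i for i in range(100)]    # stand-in training sample
assert psi(train_scores, train_scores) < 0.1    # identical data: no drift
assert psi(train_scores, [s + 5.0 for s in train_scores]) > 0.2  # shifted data: alert
```

A CI job would run this per feature against a frozen training snapshot and fail the pipeline (or raise an audit ticket) when the index crosses the chosen threshold.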

Accountability

Question: AI policy and governance must be defined and continuously managed. Upskilling and adapting workforce capabilities might be required. Are you confident that you have a policy in place, accountabilities are clear, and that your workforce is appropriately supported?

Your answer:

Organisational accountabilities may be unclear because you do not have clear policy and governance in place, or because training and communication practices mean the message is failing to reach your extended organisation. Consider the following improvements:

  1. Advisory workshop on accountability and governance
  2. Advisory on Target Operating Model, policy refinement and design
  3. AI Inventory Accountability – Best practice and Gap analysis
  4. Advisory led AI Workforce Responsible AI Basic Training
  5. Multi-discipline Advisory – Change and Transition management

Contestability and redress

Question: Recruiting agencies need to be transparent in their use of AI. Recruiting involves collecting and analysing a great deal of personal and financial data, so robust data protection and compliance with regulation are needed to protect sensitive information. Do you make it clear to your customers and clients that AI is being used, and do you then provide contestability and redress mechanisms for AI-driven decisions in your business?

Your answer:

Implementing a contestability and redress process that can be applied within your organisation is a critical step in realising responsibility goals. Introducing such a method will substantially reduce liability. Consider pursuing one or more of the following suggestions.

  1. Introduce a contestability and redress process if you do not have one. As an example, bigspark can provide a simple plug-in solution suitable for use by small and medium businesses seeking to be compliant at reasonable cost
  2. Run a multi-discipline advisory workshop on AI complaints management to create a gap analysis
  3. Validate you know who is receiving your decisions and review outcomes regularly as a part of your human oversight practice
  4. Create a ‘Model and Prompt’ technical review board to provide oversight of actual and desired outcomes. Broaden stakeholder awareness of the methods being applied to ensure decisions are valid



Explainability

Question: The black box nature of AI systems can make them difficult to explain. Brokers are always accountable for ensuring that the decisions and recommendations made to their clients are transparent and explainable. If you are using AI, are you confident that you can explain exactly why decisions are taken, both to regulators and to your customers?

Your answer:

An inability to describe and document the logic used in decision making adds unnecessary risk. Please review the options provided to see if any might help you manage effective improvement.

  • Deployment of ‘transparency logo’ to declare ‘AI inside’
  • Self auditing and documenting AI: A technical workshop of best practice
  • TCO workshop: Knowing and explaining outcomes and cost
  • Creation of an organisational wide target operating model
  • Use of third party governance solutions

Safety and Robustness

Question: Every mortgage brokerage needs to find the right balance between leveraging AI effectively and maintaining a personal touch in order to maintain client trust. Are you confident that you have a clear framework in place that enables you to do this?

Your answer:

Balancing automation with expertise and oversight is critical to introducing AI into your work of building trusted relationships with clients. We suggest the following options:

  • Advisory Workshop on Accountability and Governance
  • Advisory on Target Operating Model, policy refinement and design
  • Introducing Human Oversight Controls
  • Chatbot service tiering for Customer Service
  • Reviewing escalation, contestability and redress procedures

Fairness

Question: AI algorithms can perpetuate and exacerbate biases present in historical data, potentially leading to discriminatory practices and liability. Mortgage brokers need to constantly monitor and address issues as they arise. Are you confident that you are not discriminating now and that you are appropriately monitoring all AI over their lifecycles?

Your answer:

An inability to assure fairness in your AI not only creates unnecessary risk and liability but can genuinely harm the financial prospects of those you serve. Take a moment to consider the following remediation pathways.

  • Use diverse and representative training data, ensuring it includes a wide range of borrower demographics and scenarios
  • Audit and Test models and prompts regularly
  • Maintain human oversight checkpoints in your MLOps and prompt review pipelines
  • Introduce methods of detecting data drift early in continuous integration, pipeline and audit routines
  • Use AI to stay up to date with compliance requirements

Accountability

Question: AI policy and governance practices must be defined and continuously managed. Upskilling and adapting workforce capabilities might be required. Are you confident that you have a policy in place, accountabilities are clear, and that your workforce is appropriately supported?

Your answer:

Organisational accountabilities may be unclear because you do not have clear policy and governance in place, or because training and communication practices mean the message is failing to reach your extended organisation. Consider the following improvements:

  • Advisory Workshop on Accountability and Governance
  • Advisory on Target Operating Model, policy refinement and design
  • AI Inventory – Best practice and Gap analysis – What to inventory and how
  • Advisory led AI Workforce Responsible AI Basic Training
  • Multi-discipline Advisory – Transition management

Contestability and redress

Question: Mortgage brokers need to be transparent in their use of AI. Using AI involves collecting and analysing a great deal of personal and financial data, so robust data protection and compliance with regulation will be required to protect sensitive data. You MUST make it clear whether you are using AI or not, and for what purpose, and you must provide contestability and redress mechanisms on your site and in your business processes.

Your answer:

Implementing a contestability and redress process that can be applied within your organisation is a critical step in realising responsibility goals. Introducing such a method will substantially reduce liability. Consider pursuing one or more of the following suggestions:

  • Multi-discipline Advisory workshop on Data sourcing and accuracy review – Advisory led, Engineering and Design multi discipline workstreams
  • Multi-discipline Advisory Workshop on Policy and Target Operating model workshop
  • Complaint investigation – AI Review and gap analysis
  • Lineage mapping – Technical review and gap analysis
  • Model & prompt technical reviews and gap analysis


Safety

Question: Do you have measures in place to ensure that your AI system for lending and credit decisions is resilient to adversarial attacks and operates reliably under various conditions, including atypical or high-stress scenarios?

Your answer:

  • Hosted AI security blueprints
  • LLM Shield
  • Engineering stress test
  • PII Detector and Sanitizer
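The "PII Detector and Sanitizer" idea can be illustrated with a regex-based redactor. This is a hedged sketch, not bigspark's product: the three patterns below (email address, UK-style phone number, National Insurance number) are illustrative assumptions, and a production system would need far broader pattern coverage plus legal and locale review.

```python
import re

# Illustrative patterns only; real PII detection needs much broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\(?0\d{4}\)?\s?\d{3}\s?\d{3}\b"),
    "NINO":  re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}

def sanitize(text: str) -> str:
    """Replace each detected PII span with a bracketed placeholder label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(sanitize("Contact jane.doe@example.com or 01632 960 983."))
# prints: Contact [EMAIL] or [PHONE].
```

A sanitizer like this would sit in front of any prompt or log pipeline so that raw customer data never reaches a hosted model or persistent store unredacted.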


Explainability

Question: Can you provide clear and understandable explanations to customers regarding how your AI system determines creditworthiness and lending decisions?

Your answer:

  • AI Risk Assessment & Intention Workshop
  • Engineering Suite Audit/Review
  • Engineering Documentation Review
  • Model Interpretability & Explanation Assessment
  • Model Interpretability Framework Adaptation (LIME & SHAP)
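The idea behind SHAP-style attribution mentioned above can be shown with an exact Shapley-value computation on a toy scoring function. This is a pedagogical sketch under invented assumptions (the feature names and weights are made up, and the model is additive so the result is easy to check); it is not a substitute for running the real `shap` library against a production credit model.

```python
from itertools import combinations
from math import factorial

FEATURES = ["income", "debt", "history"]  # hypothetical credit features

def score(present: frozenset) -> float:
    """Toy credit score using only the `present` features.

    Weights are invented for illustration; in practice the 'absent'
    features would be integrated out over a background dataset.
    """
    weights = {"income": 30.0, "debt": -20.0, "history": 10.0}
    return 50.0 + sum(weights[f] for f in present)

def shapley(feature: str) -> float:
    """Exact Shapley value: the feature's marginal contribution
    averaged over all orderings of the remaining features."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (score(s | {feature}) - score(s))
    return total

attributions = {f: shapley(f) for f in FEATURES}
# Shapley values always sum to score(all features) - score(no features).
assert abs(sum(attributions.values())
           - (score(frozenset(FEATURES)) - score(frozenset()))) < 1e-9
```

For an additive model the attributions simply recover the weights; the value of SHAP in practice is that the same averaging principle yields a consistent per-feature explanation for non-additive models, which is what a customer-facing creditworthiness explanation would draw on.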


Fairness

Question: Have you implemented measures to ensure that your AI system does not exhibit biases or discrimination against any specific groups, particularly in terms of gender, race, or socioeconomic status, when making lending decisions?

Your answer:

  • AI Governance Review Workshop
  • UX Experience Review
  • Hosted Model Monitoring, Interpretability and Governance Tools
  • Human Evaluation Platform Integration (RLHF)
  • Data Model Inventory & Review


Accountability

Question: Do you have governance structures and accountability mechanisms in place to oversee the development, deployment, and continuous monitoring of your AI system used for lending and credit?

Your answer:


Redress

Question: Do you have processes available for customers to contest or seek redress for decisions made by your AI system, and are these processes communicated to and accessible by your customers?

Your answer:

  • AI Redress process and policy workshop
  • AI Labs Showcase – Explainability Interface
  • AI Labs Showcase – User Feedback Integration System
  • AI Labs – Packages & Chatbot Download