As a recruiter you are probably already using AI, but are you using it responsibly, in line with the rules that apply to the recruiting sector?
While the UK has no overarching regulator for general employment matters, the Department for Science, Innovation and Technology has released guidance aligned with the principle-based framework for AI that the UK has adopted.
Separately, the EU AI Act, enacted in July 2024, mandates strict practices. The Act classifies the use of AI into risk categories; AI used in recruitment falls into the high-risk category by default, with maximum penalties of EUR 35M or 7% of global turnover, whichever is higher. The Act protects all EU citizens, no matter where they are resident.
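As a rough illustration of how that maximum penalty scales with company size, the sketch below computes the greater of the two figures stated above (the turnover values used are hypothetical examples, not real companies):

```python
def max_penalty_eur(global_turnover_eur: float) -> float:
    """Illustrative only: the stated EU AI Act maximum is the higher of
    EUR 35M or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Hypothetical firm with EUR 2bn global turnover: 7% (EUR 140M) exceeds EUR 35M.
print(max_penalty_eur(2_000_000_000))

# Hypothetical smaller firm with EUR 100M turnover: the EUR 35M floor applies.
print(max_penalty_eur(100_000_000))
```

For any organisation with global turnover below EUR 500M, the flat EUR 35M figure is the binding maximum; above that threshold, the 7% figure dominates.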
Given the rapidly changing regulatory landscape, we encourage you to evaluate your likely risk level by answering the five questions below. On completion you will receive general remediation guidance.
To pursue a full risk review, or to explore remediations tailored to your organisational requirements, please contact us at ai.info@bigspark.dev. bigspark's classic review, delivered by our AI advisory team, uses a 12-step programme starting at £30,000 + VAT, and we are always ready to customise the assessment to your needs. The outcome of the workshop is a clearly stated gap analysis based on your current AI policy and future stakeholder needs. As a data science and engineering specialist consulting service, we will deliver remediation recommendations for any medium- or high-risk items identified across the 12 fundamentals. In addition, because our advisors include those trained in law, we stay abreast of recent changes in the regulatory landscape.
Allow bigspark’s AI advisory experts to demystify the AI risk landscape. No technical knowledge is required of your participants, and all relevant stakeholders from all disciplines are welcome.
AI adoption is accelerating. AI provides an exciting platform for growth, productivity and innovation, yet a failure to apply best practices strategically and proactively can result in technical debt or even existential risk.
- The Financial Conduct Authority (FCA), which regulates the UK financial sector, is taking a proactive approach to ensuring the safe and responsible adoption of AI. A light-touch approach is provided as a starting point, but is subject to revision.
- On 5 June 2024, Nikhil Rathi, Chief Executive of the UK’s Financial Conduct Authority, announced that there is no immediate plan for the FCA to introduce rules on the use of AI.
- On 12 June, at the AI summit, FCA panellist Sholtana Begum made clear that this is because the FCA believes responsible AI practices should not be governed in isolation; rather, good governance should be woven into the fabric of business best practice.
- The FCA will, for now, continue to rely on the governance provisions in the Senior Managers and Certification Regime (SMCR), the Consumer Duty and the market integrity framework.
- It places careful emphasis on five core principles: explainability, safety and a duty of care, fairness, accountability, and redress.
What should the financial sector plan to do?
First, it is important to act. Familiarise yourself with AI by learning and by ideating value propositions and use cases.
Second, determine your ethics and apply your policies: integrate AI adoption with your existing policies and controls, aligned with your enterprise architecture.
Most importantly, bigspark’s guidance is to be clear about your organisation’s intentions and risk appetite, and to put policy in place now.
To simplify your use of AI and data for good in the financial sector, bigspark provides a free risk assessment. See below.
Our AI advisors are ready to hear from you: ai.info@bigspark.dev