02 December 2022
BCS has responded to a call for evidence from the Parliamentary Science and Technology Select Committee on the governance of Artificial Intelligence, submitting its response on 25 November 2022.
In September 2021 the UK government published the National AI Strategy. The strategy included an action to consult on proposals to regulate the use of AI under certain circumstances; those proposals are published here: ‘Establishing a pro-innovation approach to regulating AI proposals’. The BCS response to the consultation, based on input from our professional membership, can be found here. The key focus of the Science and Technology Select Committee is examining the government’s proposals to regulate AI, so the BCS evidence to the Committee is derived from our response to the government’s consultation on those proposals. The following headings are taken directly from the consultation, each followed by our response.
It is important to appreciate that Artificial Intelligence (AI) is still a set of nascent technologies. There are organisations in all UK regions struggling to build management and technical capability to successfully adopt AI.
Good governance of technology that impacts on people’s lives, whether that is AI or some other digital technology, leads to high standards of ethical practice and high levels of public trust in the way the technology is used.
Our evidence strongly indicates there is not a uniformly high level of ethical practice across information technology in general, and there is a low level of public trust in the use of algorithms (including AI) to make decisions about people.
Given governance of AI is often integrated within existing governance structures around data and digital, our evidence indicates there is not a uniformly high standard of AI governance across the UK.
Our most recent survey of ethical standards in information technology had responses from almost four and a half thousand BCS members (see Table 1 for details) working in all sectors of the economy and at all levels of seniority.
When asked to assess the general standard of ethical practice in the organisations they worked for:
Table 1: Results from survey of over 4,400 BCS members in 2018
Whilst it is encouraging that ethical standards were perceived to be high by almost a third of those responding, the responses also show a high level of variability overall, where a quarter of members reported low ethical standards.
Table 2: Survey results for the question: ‘Which, if any, of the following organisations do you trust to use algorithms to make decisions about you personally?’
In 2020 BCS commissioned YouGov to survey a representative sample of 2,000 members of the public across the UK on trust in algorithms.
The headline result from the survey was that:
The survey question in full was: ‘Which, if any, of the following organisations do you trust to use algorithms to make decisions about you personally?’ The range of options to choose from is shown in Table 2 [NB: bold emphasis in the table was added only after the survey results were analysed].
For the use of AI to be more transparent and explainable, organisational governance should:
Properly reviewing AI decisions requires governance structures that follow the principles in Section 1. Decisions involving AI can then be properly reviewed as part of the governance structure: the review of decisions made by an AI system, whether in the public or private sector, should focus on assuring that the proper governance structures are in place and that governance processes are followed. That will mean decisions are based on data that is standards-compliant and that enables effective use of digital analysis and auditing tools and techniques to validate decisions.
Previous BCS studies highlighted that the use of an AI system should trigger alarm bells from a governance perspective when it is:
We call an AI system problematic when it has the above attributes.
Problematic AI systems form a significant class whose decisions would be very challenging to scrutinise or review. The overarching aim should be to prevent problematic AI systems being used to make decisions about people in the first place, which is best done by ensuring the governance principles described in Section 1 are always followed.
The BCS view is that regulation should allow organisations as much freedom and autonomy as possible to innovate, provided those organisations can demonstrate they are ethical, competent, and accountable when measured against standards that are relevant to the area of innovation. Pro-innovation regulation should enable effective knowledge transfer, the sustainable deployment of new technologies, as well as stimulate organisations to embrace innovative thinking as core to their strategic vision and values, as illustrated in Figure 1.