The Civic AI Observatory (civicai.uk) is a new initiative from Nesta and Newspeak House to support organisations working for the public good as they plan and adapt to the rapidly evolving field of generative artificial intelligence. Resources, case studies, events, online spaces, and more.
Great Expectations
Major new AI language models are now being released every few weeks. They are gradually getting better, both in "intelligence" (AKA "capability") and in "multimodality" (the ability to work with multimedia inputs and outputs). A dozen or more companies are releasing these models, and no single model significantly surpasses the others; at the moment, choosing between them is an engineering decision. Open-source models are more or less keeping pace with proprietary ones, largely thanks to Meta's Llama series. There are also now many smaller models being developed specifically to run locally on phone hardware.
Investment and attention in this sector are being driven by the belief that model capabilities will continue to improve, leading to significant new applications appearing in the medium term (6-18 months), such as reliable autonomous agents. However, there is no guarantee that the current rate of improvement can be sustained – training new models is becoming extremely expensive, all easily available training data has already been used, and further improvement will come only through breakthroughs in model architecture or innovations in synthetic data for model training.
Language models have been applied to a wide range of tasks, which can broadly be characterised as text extraction, summarisation, classification, translation, editing, drafting, ideation, and software development. The models still make mistakes, so most organisations are choosing either to manually supervise the outputs or to apply them in settings where perfect accuracy isn't required, such as statistical corpus analysis. An example of the latter can be seen in this case study where AI was used to analyse hundreds of thousands of lengthy Financial Ombudsman decisions.
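To make the corpus-analysis pattern concrete, here is a minimal sketch of bulk classification with a hosted model. It assumes the OpenAI Python SDK (pip install openai, with OPENAI_API_KEY set); the model name and the three-label outcome scheme are invented for illustration, and the linked case study will have used its own pipeline and prompts. The point is the shape of the approach: individual labels may be wrong, but aggregate counts over a large corpus can still be informative.

```python
"""Illustrative bulk classification over a document corpus.

Individual model labels can be wrong, so this pattern suits
aggregate analysis rather than per-case decisions.
"""
from collections import Counter

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY

client = OpenAI()

# Hypothetical outcome labels -- replace with your own taxonomy.
LABELS = ["upheld", "not_upheld", "partially_upheld"]


def classify(decision_text: str) -> str:
    """Ask the model for exactly one label from LABELS."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the outcome of this ombudsman decision. "
                    f"Reply with exactly one of: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": decision_text[:8000]},  # crude truncation
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unparseable"


def tally(decisions: list[str]) -> Counter:
    """Aggregate labels across the corpus; occasional errors wash out at scale."""
    return Counter(classify(text) for text in decisions)
```

Before trusting any aggregate numbers, it is worth spot-checking the model's labels against a hand-labelled sample to estimate the error rate.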
We don’t consider these applications to have significant strategic implications for most organisations at this time. It remains prudent to wait for the product landscape to mature rather than to make investments now – the software packages already in use in your organisation will continue to gain AI-enhanced features. For example, both the Google and Microsoft enterprise suites are rolling out their own AI customer service systems.
On the other hand, there are strategic implications from the growing risk of sudden and unpredictable changes in user behaviour. Two big developments are coming that will mean many more people interact with AI in everyday life. Firstly, most smartphones will soon have AI integrations built into the operating system. Secondly, Meta has announced that it is rolling out AI features in WhatsApp, Instagram, and Facebook Messenger. It’s not yet clear what these will mean in practice; one possibility is that AI becomes a significant alternative to search, which may affect your organisation if it depends heavily on search traffic – for example, if users currently find you through search when looking for legal aid or medical advice.
Up to now we’ve seen only two unmistakable changes to the way the world works: how students write essays, and how jobseekers write applications. It’s notable that these are both “interim” activities in which process automation is intentionally suppressed – the writing is itself meant to be evidence of the author’s effort and ability. This fits the pattern: so far, mature existing systems are still better than what these models can do on their own, and the problems fall on those who have to deal with the downstream consequences.
Observations
AI risk management has become a thriving ecosystem with hundreds of companies creating products to tackle issues with governance, compliance, quality, security, and privacy that arise when putting AI models into practice. link
A prototype tool that extracts the structure of a form from an image or PDF and generates a multi-page web form in the GOV.UK schema (a sketch of the general extraction pattern follows this list). link
National Audit Office report on UK government AI adoption strategy: “Our survey of government bodies found that AI was not yet widely used across government, but 70% of respondents were piloting and planning AI use cases.” link
Policy and Guidance for AI in UK government procurement: “Planning for a general increase in activity as suppliers may use AI to streamline or automate their processes and improve their bid writing capability and capacity leading to an increase in clarification questions and tender responses.” link
CAST launches AI resources hub, including communities for digital leads and grantmakers, as well as grants available for experiments. link
A collection of case studies of civic services delivered via AI-powered WhatsApp chatbots. link
Can we protect elections from artificial intelligence? link
How generative AI chatbots respond when asked for the latest news. link
The Impact and Opportunities of Generative AI in Fact-Checking link
International Fact-Checking Network renews its grant program for leveraging WhatsApp to track or respond to AI-generated misinformation. Deadline 8 July! link
BBC Editorial Guidelines on the use of Artificial Intelligence: “Generative AI should not be used to directly create news content published or broadcast by BBC News…” link
UK Department for Education report on Generative AI in Education: “In addition to augmenting educator jobs and tasks, GenAI could also fundamentally alter how and what people learn by changing how information is synthesised and presented.” link
Contribute to a survey on the impact of AI on knowledge consumption and production processes, specifically research, cultural heritage and education. link
Joseph Rowntree Foundation report on the implications of AI for digital inclusion: “People started talking about how AI could help them, and how using AI was something they could do too.” link
Mozilla launches the AI Intersections Database, which maps key social justice and human rights impacts as well as entities working on them. link
National Cyber Security Centre assessment of the impact of AI on cybersecurity over the next two years: “All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees. AI will almost certainly make cyber attacks against the UK more impactful…” link
Digital Regulation Cooperation Forum launches AI and Digital Hub, where innovators can obtain answers to complex questions that span the remits of the CMA, FCA, ICO, and Ofcom. link
Stanford AI Index Report covers technical advancements, public perceptions, geopolitical dynamics, estimates of model training costs, analyses of the responsible AI landscape, and impact on science & medicine. link
100 Days of AI: a series of one hundred 30-minute exercises to improve your AI skills. link
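One item above is worth unpacking: the GOV.UK form-extraction prototype. We don't know how that particular tool is built, but the extraction half of this kind of pipeline is commonly a single call to a vision-capable model asked to return structured JSON. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name and field schema are invented for illustration, and turning the extracted fields into an actual GOV.UK form definition would be a separate step.

```python
"""Illustrative extraction of form fields from an image via a vision model."""
import base64
import json

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY

client = OpenAI()


def extract_form_fields(image_path: str) -> list[dict]:
    """Return a list of field descriptions found on the pictured form."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model works
        temperature=0,
        response_format={"type": "json_object"},  # constrain output to valid JSON
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "List every input field on this form as JSON: "
                            '{"fields": [{"label": "...", '
                            '"type": "text|date|radio|checkbox", '
                            '"required": true}]}'
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            },
        ],
    )
    return json.loads(response.choices[0].message.content)["fields"]
```

JSON mode guarantees syntactically valid JSON but not correct content, so the extracted field list still needs checking against the source form before anything is generated from it.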
Next steps
Doing anything with generative AI? Seen something useful? Tell us about it: hello@civicai.uk
Join our WhatsApp community to chat with us and other readers:
#CivicAI: Capabilities (to discuss this issue)
#CivicAI: Safe Bets (to discuss issue 3)
#CivicAI: Productivity (to discuss issue 2)
#CivicAI: Organisation Policies (to discuss issue 1)
Join us at these upcoming events:
20 June: AI in Grantmaking
24 June: Trends & Opportunities in US Political Campaign Tech
2 July: AI for Government Unconference
16 July: Civic AI Unconference