The Civic AI Observatory (civicai.uk) is a new initiative from Nesta and Newspeak House to support civic organisations as they plan and adapt to the rapidly evolving field of generative artificial intelligence. Resources, case studies, events, online spaces, and more.
Chat with us & other readers about this issue: join our WhatsApp group
Join us in person at our first Civic AI Unconference! 20th October, 2-5pm, Nesta HQ
Doing anything with generative AI? Seen something useful? Tell us about it: hello@civicai.uk or just hit reply!
Your organisation needs some new policies
In this issue we’ll be looking at organisational policies: even if your organisation does nothing at all with generative AI, you will still need them. The widespread accessibility of primary interfaces like ChatGPT, as well as the emerging landscape of generative AI tools, has the potential to affect many different parts of your organisation, with both opportunities and risks. Responding to this will likely require a more general, organisation-wide strategy than you’ve had up to now.
Here are all the different kinds of organisational policies that we’ve seen:
1/ Internal policies for staff use
First up - people in your organisation will already be using these tools, so you’ll want a policy on employee use. This will bear a passing resemblance to a “use of social media” policy. Individuals and teams across organisations are already actively experimenting with tools such as ChatGPT, and it’s important to acknowledge this and introduce structures that steer such experimentation in the right direction.
This should cover both opportunities and risks. Employees may benefit from training or allocated time to share and explore AI tools together. Depending on your organisation there may also be pressing issues related to security, privacy, reputational risk, or wider ethical concerns. In general, the policies we have seen have been quite balanced, both recognising the possibilities of the technology as well as warning against the dangers.
Here is a selection that may be useful in forming your own policies:
The UK government’s official guidance on how civil servants should use this technology
A sample policy from a smaller organisation that considers ethical guidelines for use in campaigning
One-pager for staff from the London Office of Technology & Innovation
A sample policy from the Society for Innovation, Technology and Modernisation
Let us know if you have one already, or are planning to make one: hello@civicai.uk
2/ Internal policies for procuring generative AI products
It’s also important to have a policy for any tools you’re buying. We expect that many agencies and startups will be selling new AI products, so it’s helpful to know how to evaluate them and work out which ones make sense for your organisation.
Here are the best resources we have found so far:
Guidelines for Procurement of AI Solutions from the World Economic Forum
Commentary on US public sector procurement of AI from the Centre for Democracy & Technology
A snapshot of AI procurement challenges in government from GovLab
3/ Public statements about the organisation’s use of generative AI
Your customers, members, donors, or other stakeholders may be asking questions about your approach to AI. For many orgs, just publishing the internal policies may be enough. However, in certain sectors such as journalism, AI poses added reputational risks, so organisations are particularly keen to reassure their audiences. We have seen samples from various news orgs.
Despite a few negative stories, public perception of generative AI use in civic contexts has been surprisingly positive. Some recent research from the Centre for Data Ethics and Innovation tells us that most people are open to the use of foundation models within the public sector, especially in low-risk use cases, provided that there’s human review of outputs and accountability over decisions.
If you have seen or done any other research like this, even anecdotal, let us know: hello@civicai.uk
4/ Internal policies for using generative AI in digital services
There’s no doubt that generative AI will be used to improve digital services and enable new kinds of user experience, but this is very new and how it will look is not yet clear. Working this out will need both technical knowledge of what the models can do and detailed internal knowledge of the organisation and service. Digital product teams can start thinking about simple applications, but model capabilities are still improving quickly and effective design patterns will emerge as the ecosystem matures and tooling improves.
We will be tracking this in future issues, but in the meantime, if you are planning a project please tell us: hello@civicai.uk
If your organisation has already been using standard machine learning then perhaps you already have some kind of responsible AI policy, but in light of generative models you may want to update it. There will be many new possible applications that you may choose to avoid or pursue, and you may want to review your approach to disclosure, testing, data privacy, oversight, and so on. “Non-generative AI” policies are a relatively mature area now - this paper gathers and compares many examples - but we haven’t yet seen many that account for generative AI specifically:
Generative AI: 5 Guidelines for Responsible Development from Salesforce
Seven Principles for ‘Responsible’ Generative AI by the UK’s Competition & Markets Authority
Note that we’re not talking about using generative AI to help with software development, although there is an entire revolution going on in this area with tools like GitHub Copilot (these should also be covered in your staff use policy!).
5/ Policies for written submissions
Anyone who receives lots of text-heavy documents - for example, grant applications, tenders, job applications, even student essays - will have noticed that life changed in 2023: many more submissions, often of dubious quality. There’s a whole subfield in using AI to evaluate applications - maybe adding bias, maybe removing it - but in the meantime, you might want to provide guidelines for your applicants on whether and how they should use AI to produce their submissions. We’ve only found one example so far, from the Research Funders Policy Group.
As for education, there’s lots to say here and we’ll likely come back to this in a future issue, but the speed of change is fascinating and might inspire ideas for the emerging problem of written submissions more generally:
Statement on the use of generative AI in education from the Department for Education
Principles on the use of AI in education from the Russell Group
Literature review on ChatGPT in education
Teachers & Professors discussing the pros and cons of ChatGPT on Reddit
A few other interesting things on the governance of generative AI
Great research from Harvard Law School surveying governance of AI in corporate organisations
Microsoft offers various kinds of governance support for organisations using their AI products
credo.ai is a tool for tracking whether AI systems deployed in an organisation comply with its policies. Perhaps you could do this with a spreadsheet (see the sketch after this list), but it’s interesting that a dedicated tool for this now exists
Wikipedia’s policy for editors on LLM use
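To make the spreadsheet idea above concrete, here’s a minimal, purely hypothetical sketch of what such a compliance register might look like - all system and policy names are invented for illustration, and this is not based on how credo.ai itself works:

```python
# A minimal, hypothetical compliance register: each deployed AI system
# records which organisational policies it has been reviewed against.
# All names here are invented for illustration.

REQUIRED_POLICIES = {"staff-use", "procurement", "responsible-ai"}

systems = [
    {"name": "grant-triage-assistant",
     "reviewed_against": {"staff-use", "responsible-ai"}},
    {"name": "website-chatbot",
     "reviewed_against": {"staff-use", "procurement", "responsible-ai"}},
]

for system in systems:
    # Flag any required policy this system hasn't yet been reviewed against
    missing = REQUIRED_POLICIES - system["reviewed_against"]
    if missing:
        print(f"{system['name']}: still needs review against {sorted(missing)}")
    else:
        print(f"{system['name']}: reviewed against all required policies")
```

Even something this simple makes gaps visible, which is most of what a governance register needs to do.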
Next steps
For the next newsletter, we’ll be looking at common enterprise tools that are getting generative AI integrations. Have any of the tools you use every day suddenly got some magic new buttons? Are any of them useful? Tell us! hello@civicai.uk
Chat with us & other readers about this issue: join our WhatsApp group
Join us in person at our first Civic AI Unconference! 20th October, 2-5pm, Nesta HQ