Conclusions. Policy mechanisms, challenges and recommendations in urban AI
Local governments worldwide are increasingly adopting algorithmic systems to improve the delivery of public services. However, growing evidence indicates that these systems can cause unintended harms and are often implemented with little transparency. As a result, their adoption has frequently been accompanied by the development of guiding principles for the responsible use of AI technologies, primarily at the national, supranational or global levels. Notable examples include the OECD AI Principles (2019), the G20 AI Principles (2019), the Council of Europe AI Convention drafting group (2022-2024), the Global Partnership on Artificial Intelligence Ministerial Declaration (2022), the G7 Ministers’ Statement (2023), the Bletchley Declaration (2023), the Seoul Ministerial Declaration (2024), the EU AI Act (2024) and the UN report “Governing AI for Humanity” (2024). Yet these frameworks generally provide only broad guidance on what constitutes responsible AI use, offering little practical direction on how these principles should be applied in real-world contexts.