This AI Operational Policy (the "Policy") sets out how GEN utilises artificial intelligence (AI) and Large Language Models (LLMs) in its operations, including software development practices. This Policy is incorporated into, and forms an adjunct to, the Framework Agreement between GEN and the Customer. Capitalised terms not defined here have the meaning given in the Framework Agreement. If there is any inconsistency between this Policy and the Framework Agreement, the Framework Agreement takes precedence.
GEN has been pioneering intelligent systems since 1990, long before "AI" became a mainstream buzzword. Our approach to artificial intelligence is grounded in decades of practical experience and a commitment to accuracy over speed. We are heavily invested in AI technologies and have developed our own sophisticated AI platform, GAIN (GEN's Artificial Intelligence Network), which represents our philosophy of precision-focused, domain-specific AI solutions.
For more information about our AI capabilities and Large Language Model services, please refer to our LLM Services page.
AI and LLM technologies excel in specific areas where they demonstrably add value to our operations and service delivery. GEN deploys AI in the following operational contexts:
GEN maintains a clear and deliberate policy regarding the use of AI in software development. Our development team comprises skilled professionals with extensive expertise in their respective programming languages, frameworks, and domains. All code produced by GEN is professionally written, thoroughly documented, and subject to rigorous quality assurance processes.
AI tools are utilised to support, but not replace, our development processes in the following areas:
To understand our position on AI-generated code, it is essential to understand what Large Language Models actually are and how they function. Despite the marketing terminology, LLMs are not "intelligent" in any meaningful sense. They are sophisticated statistical prediction engines that determine the most probable next word (or token) based on patterns observed in their training data. When an LLM produces output, it is not reasoning, understanding, or thinking; it is performing mathematical calculations to predict what text is most likely to follow, based on billions of examples it has processed.
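The next-token mechanism described above can be sketched with a deliberately tiny toy model. This is an illustrative sketch only: real LLMs operate on sub-word tokens with billions of learned parameters, but the underlying principle is the same — output is selected by statistical likelihood over training data, not by comprehension.

```python
# A toy "language model": given the current word, predict the next word
# purely from observed frequencies in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

The model "predicts" plausibly within its corpus yet understands nothing about cats or mats; scaling the same idea up does not change its character, only its fluency.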
This fundamental characteristic makes LLMs remarkably capable in certain domains, such as text analysis, pattern matching, translation, and summarisation, where statistical probability aligns well with useful output. However, it also means they have no genuine comprehension of the content they produce. An LLM does not "understand" code any more than a calculator "understands" arithmetic.
GEN does not use AI to write production code. This is a deliberate, informed decision based on our understanding of how these tools actually function.
When an AI "writes" code, it is not engineering a solution. It is statistically regurgitating fragments of code from its training corpus, which consists primarily of publicly available code from internet repositories, forums, tutorials, and open-source projects of wildly varying quality. The AI patches these fragments together based on what patterns appear most frequently in similar contexts. The result may compile and even appear to function, but it lacks the architectural coherence, contextual appropriateness, and considered design that professional software engineering demands.
Those unfamiliar with software development are often impressed when AI produces working code from a simple prompt. However, professional developers consistently observe that AI-generated code exhibits characteristic flaws: inefficient algorithms, poor error handling, security vulnerabilities, inconsistent naming conventions, unnecessary complexity, and a fundamental lack of understanding of the broader system context. The code may "work" in a narrow sense whilst being entirely unsuitable for production deployment, maintenance, or integration with existing systems.
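Two of the flaws listed above — injection vulnerabilities and absent error handling — can be made concrete with a hypothetical example. The names here (`get_user`, the `users` table) are invented for this sketch; the first function shows a pattern frequently seen in generated code, the second shows the engineered equivalent.

```python
import sqlite3

# Pattern often seen in generated code: SQL built by string concatenation
# (an injection vulnerability) and no handling of the "not found" case.
def get_user_naive(conn, username):
    cur = conn.execute("SELECT id FROM users WHERE name = '" + username + "'")
    return cur.fetchone()

# The engineered equivalent: a parameterised query, an explicit result for
# the "not found" case, and database errors surfaced as a domain failure.
def get_user(conn, username):
    try:
        cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
        row = cur.fetchone()
        return row[0] if row else None
    except sqlite3.Error:
        raise RuntimeError(f"lookup failed for user {username!r}")
```

Both functions "work" in a demo, but a crafted input such as `x' OR '1'='1` makes the naive version return rows it should not, while the parameterised version simply finds no match.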
More critically, AI cannot understand business requirements, regulatory constraints, security implications, or the specific operational context in which code will execute. It cannot make the judgement calls that experienced developers make instinctively: when to prioritise performance over readability, how to structure code for future extensibility, or what edge cases are relevant to a particular domain. These decisions require genuine understanding, not statistical prediction.
We utilise integrated development environment (IDE) features such as IntelliSense and autocomplete functionality. These tools provide suggestions based on language syntax, available methods, and contextual hints. However, it is essential to understand that these are suggestions only. Every suggestion is evaluated by the developer, who makes the final determination on whether to accept, modify, or reject it based on their professional judgement and understanding of the codebase.
Our developers are skilled professionals with genuine expertise in their respective languages, frameworks, and domains. They write code; they do not delegate that responsibility to statistical prediction engines. This approach ensures that all code delivered by GEN reflects human expertise, considered design decisions, and professional accountability. When a GEN developer writes code, they understand what it does, why it does it that way, and how it fits into the broader system. These are assurances that AI-generated code simply cannot provide.
All AI-assisted processes at GEN operate under human oversight. Automated outputs, whether from code review assistance, documentation generation, or test analysis, are reviewed and validated by qualified personnel before being incorporated into deliverables. This ensures that AI serves as a tool to enhance human capability rather than replace human judgement.
We do not make solely automated decisions with significant effects without appropriate human oversight. Accountability for all work product, whether AI-assisted or not, remains with the responsible GEN personnel and ultimately with GEN as an organisation.
Any processing of personal data in connection with AI operations is carried out in accordance with GEN's Privacy Notice and applicable data protection law, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018.
Where AI tools are used, we implement appropriate safeguards to protect data confidentiality and integrity. GEN utilises locally hosted, proprietary AI models where appropriate and does not transmit Customer data to external AI providers for processing or training without explicit agreement. Data minimisation principles are applied, and personal data is not retained longer than necessary for the specific processing purpose.
AI systems operated by GEN are subject to the same security controls and confidentiality obligations as all other GEN systems and services. Access to AI tools and their outputs is restricted to authorised personnel, and all activities are logged for audit purposes. The confidentiality provisions of the Framework Agreement apply to all AI-related processing.
AI-assisted outputs are subject to the same quality standards as all other GEN deliverables. Our quality assurance processes do not distinguish between AI-assisted and purely manual work; all outputs must meet our published standards regardless of how they were produced. This ensures consistent quality and reliability across all services.
We continuously evaluate and refine our AI tools and processes to improve accuracy, efficiency, and reliability. Feedback from development teams, quality assurance processes, and Customer outcomes informs ongoing improvements to our AI capabilities. We maintain awareness of developments in AI technology and adopt new capabilities where they demonstrably add value.
This Policy operates in conjunction with the following GEN policies and agreements:
GEN may amend this Policy from time to time as AI technologies and our use cases evolve. Changes take effect prospectively from the stated effective date or, if none, from the date of posting.
This Policy is incorporated into the Framework Agreement and does not operate as a standalone agreement. If there is any inconsistency between this Policy and the Framework Agreement, the Framework Agreement takes precedence.
Questions regarding this Policy or GEN's use of AI should be directed via the HelpDesk at https://support.gen.uk. We welcome enquiries about our AI capabilities and are happy to discuss how our AI services might support your requirements.