Responsible AI Policy
This Responsible AI Policy sets clear guidelines for how NMX Global Software, Inc. uses artificial intelligence (AI) in external client projects. We aim to leverage AI technologies (such as code-generating assistants, machine-learning models, and generative content tools) in ways that are ethical, lawful, and secure. All AI-driven activities must comply with this policy, and each project team is responsible for reviewing and enforcing these rules at every stage.
Scope and Definitions
- Scope: This policy applies to all client-facing software development services involving AI (including data analytics, machine learning, generative content, or AI-driven automation). It does not govern internal research or product R&D.
- Artificial Intelligence (AI): For this policy, “AI” includes machine-learning models, large language models (LLMs), generative AI tools (e.g. ChatGPT, DALL·E, GitHub Copilot), and any system that makes automated predictions or generates content.
- Generative AI: A subset of AI that creates new content (text, code, images, etc.) from learned patterns.
- Human Oversight: A qualified person reviews and can override AI outputs or decisions.
Permitted AI-Related Work
We encourage using AI to improve efficiency and creativity when performed responsibly. The following AI tasks are permitted under controlled conditions, subject to compliance with client requirements and this policy:
- Code Generation & Assistance: Using AI coding assistants (e.g. Cursor, Windsurf, GitHub Copilot, ChatGPT’s coding features) to generate, refactor, or document code, provided that: (a) the content is checked by developers for correctness, security, and licensing; (b) no proprietary or sensitive code/data is sent to public AI services without proper anonymization; and (c) all outputs are treated as untrusted until reviewed by a human.
- Data Analysis & Machine Learning: Processing and analyzing client data with AI/ML models (for analytics, prediction, classification, etc.), provided that data is collected and used lawfully (e.g. with client consent or within contractual scope) and protected by data-security controls. Model training and inference are allowed only with data that is either non-sensitive or explicitly approved by the client, and in compliance with privacy laws.
- Image and Content Generation: Creating graphics, documentation, prototypes, or marketing content with generative AI tools is permitted under the following conditions: all AI-generated assets must be reviewed for accuracy, ethical considerations, and licensing, and we verify that any training or output content does not infringe copyright or breach confidentiality.
- Automation of Routine Tasks: Automating benign, repetitive tasks (such as unit-test generation, formatting, transcription) using AI tools is allowed, provided a human monitors the results and sensitive information is not exposed.
- Customer-Facing AI (Chatbots, Voice Assistants): Implementing AI-powered chatbots or assistants for client use is allowed if the system’s outputs are supervised, and it is clear to end-users that responses are AI-generated.
Each permitted activity must follow our AI Project Checklist: confirm legal basis for data use (e.g. GDPR consent), apply input/output review, document the AI system’s purpose and limitations, and ensure human oversight of any critical outputs. These measures help make our AI solutions trustworthy and align with industry frameworks for AI risk management.
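The anonymization requirement above (no proprietary or sensitive data sent to public AI services in raw form) can be supported by a pre-submission scrub. A minimal sketch, assuming illustrative regex patterns and function names that are not part of this policy; a production deployment would rely on a vetted data-loss-prevention (DLP) tool rather than hand-rolled patterns:

```python
import re

# Illustrative patterns for common sensitive tokens; a real deployment
# would use a vetted DLP service with broader, maintained rule sets.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),                # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),           # card-like digit runs
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
     r"\1=<REDACTED>"),                                                 # credential assignments
]

def scrub(text: str) -> str:
    """Replace sensitive substrings before any text leaves the company network."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

A scrub like this is a backstop, not a substitute for judgment: reviewers should still confirm that prompts contain no client-identifying context before submission.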
Prohibited AI-Related Work
The following uses of AI are strictly prohibited:
- Unauthorized Use of Client Data: No client-proprietary, personal, or confidential data may be used to train or fine-tune external AI models (including public services) unless the client has given explicit permission and proper data agreements are in place. In particular, employees must never input private data or source code into public AI interfaces in raw form.
- Fully Automated Decisions in High-Risk Domains: We do not allow AI to make critical decisions without human review. Any automated process that produces legal, medical, financial, or safety-related outcomes must have a qualified human in the loop. For example, AI-generated loan approvals, diagnostic recommendations, or legal advice cannot be released without explicit professional oversight.
- Licensed Professional Advice: AI must not be used to give unreviewed personalized advice in regulated fields (legal, medical, financial, etc.). AI can assist professionals (e.g. by summarizing information) only if a qualified expert ultimately reviews the output.
- Privacy and Ethical Violations: We forbid using AI to violate privacy or generate harmful content. This includes producing hate speech, harassment, explicit sexual content, extremist material, or instructions for illegal activities. AI systems must not infer or reveal protected attributes (race, health, religion, etc.) about individuals. They must never be used for non-consensual surveillance or collecting biometric identifiers. In short, all AI outputs must respect human rights and comply with privacy laws (GDPR, CCPA, etc.).
- Intellectual Property (IP) Violations: We must avoid infringing others’ IP. AI-generated content that resembles copyrighted material is disallowed unless properly licensed. For instance, using copyrighted text or images without permission for training or generation is forbidden. As industry lawyers note, the ownership of AI-created works is legally uncertain, so we treat any such output cautiously and clarify rights with clients.
- Misrepresentation: We will not present AI outputs as the unedited, original work of a human. Any AI-generated report, design, or code must be properly labeled if required, and clients will be informed of AI involvement in the process.
In summary, if an AI use might break laws, violate regulations, or contradict a client’s rules, it is prohibited. When in doubt, project leads must consult Legal or Compliance before proceeding.
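The human-in-the-loop requirement for high-risk decisions can be made concrete as a gate in the delivery workflow. A minimal sketch, assuming a hypothetical `Decision` record and reviewer callback (the names and fields are illustrative, not mandated by this policy):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    """An AI-produced recommendation awaiting release."""
    subject: str
    recommendation: str
    high_risk: bool          # legal, medical, financial, or safety-related
    approved: bool = False
    reviewer: str = ""

def release(decision: Decision, review: Callable[[Decision], str]) -> Decision:
    """Release a decision only after a qualified human signs off on high-risk cases."""
    if decision.high_risk:
        decision.reviewer = review(decision)   # blocks until a human reviews
        decision.approved = bool(decision.reviewer)
    else:
        decision.approved = True               # routine output, still logged for audit
    return decision
```

The point of the gate is that no code path releases a high-risk output without a named reviewer attached, which also produces the audit trail our compliance reviews expect.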
Employee Use of Public AI Tools
Employees may use public AI and generative tools (e.g. ChatGPT, Claude, GitHub Copilot) only under controlled conditions:
- Non-Sensitive Tasks Only: Use public AI tools for non-confidential work: e.g. getting generic coding examples, brainstorming, or natural language queries about common problems. Do not enter any proprietary code, client data, personal information, or trade secrets into public tools. Studies have found employees often leak confidential data into ChatGPT, which can then be incorporated into the model. We strictly forbid that.
- Tool Restrictions: Use of public tools is disallowed if the client prohibits it in their project. For internal projects, employees should preferentially use approved AI services or on-premises solutions with appropriate privacy guarantees.
- Output Review: Any code or content generated by AI must be thoroughly reviewed by the user. AI suggestions are not final; developers remain fully responsible for validating correctness, security, and compliance. Mistakes or vulnerabilities introduced by blindly accepting AI output must be caught and fixed by human review.
- No Bypass of Policies: Accessing any AI tool in violation of our corporate security policy (for example by bypassing network controls or using unauthorized browser extensions) is forbidden. All AI tool usage should comply with IT and information-security guidelines.
- Training and Awareness: We provide training so that employees understand the limitations and risks of AI tools. Employees must stay informed about AI privacy (e.g. knowing that consumer ChatGPT may retain prompts, while enterprise offerings provide stronger data controls) and comply with each service’s terms.
By following these rules, we harness AI tools for productivity without compromising client trust or data security.
Client-Specific AI Usage and Customization
Clients have the right to set their own AI-related constraints. We will honor client instructions regarding AI:
- Customization and Documentation: During project scoping and contracting, we will document any client preferences or restrictions on AI usage. For example, a client may require “no generative AI tools are to be used for this project” or may specify an approved list of tools. These requirements become part of the Statement of Work or contract.
- Contract Clauses: We can include explicit contractual language to enforce AI rules. For instance, we might use a clause such as “The contractor will not utilize any generative artificial intelligence systems except in accordance with Company policy and Client’s directions.” This ensures that client-specific AI permissions or bans are legally binding.
- Enforcement: Project managers are responsible for enforcing client AI policies day-to-day. If a client prohibits AI on their codebase, all team members must comply and no AI assistance will be used. Any requested deviation (for example, using an AI code review tool) must be re-approved by the client.
- Transparency: We will be transparent with clients about our AI practices. If we use any AI in their project (e.g. for testing or prototyping), we will inform the client in advance and obtain approval, documenting that usage in design docs or reports.
Clients trust us to follow their directions. By explicitly capturing these rights and obligations in our agreements and project plans, we ensure all AI work aligns with each client’s comfort level and regulatory constraints.
Compliance with Laws and Industry Standards
We comply with all applicable laws, regulations, and industry standards governing AI and data. Key references include:
- Data Protection and Privacy Laws: We adhere to data-protection regulations such as the GDPR and CCPA. AI projects involving personal data will follow data-minimization, purpose-limitation, and transparency principles. Specifically, we observe GDPR Article 22 by ensuring that any automated decision affecting individuals is accompanied by human review and safeguards. When handling EU/UK personal data, we rely on lawful bases (consent, contract, etc.) and may conduct Data Protection Impact Assessments for high-risk AI uses.
- Information Security (ISO/IEC 27001): We follow ISO 27001 principles to ensure confidentiality, integrity, and availability of data used in AI systems.
- NIST AI Risk Management Framework: We align with the NIST AI Risk Management Framework (AI RMF), which advocates incorporating trustworthiness, risk assessment, and governance into the AI lifecycle. This means identifying potential harms (bias, safety, privacy, etc.) in each project and mitigating them (via testing, audits, controls). We use NIST’s guidance for documentation and accountability of AI risk decisions.
- Client and Sector-Specific Regulations: When working in regulated industries (e.g. finance, healthcare, government), we also ensure AI use meets sector rules (HIPAA for health data, PSD2 for finance, etc.). If a project’s regulatory context imposes additional AI constraints (such as FDA rules for medical devices or FINRA rules for financial advice), we integrate those into our practice.
By referencing these standards and frameworks, we demonstrate our commitment to high-quality, accountable AI. All AI work will be documented in accordance with our Quality Management System, and we will maintain records (logs, test results, reviews) to prove compliance during audits.
Implementation, Review, and Enforcement
- Training: All relevant employees receive training on this AI Policy. We will also provide project-specific briefings as needed when new AI tools or regulations emerge.
- Incident Reporting: Any suspected violation of this policy or unexpected AI-related incident must be reported immediately to the project manager and the Compliance team. We will investigate and take corrective action (from re-training staff to pausing AI tools) depending on the severity.
- Audits and Reviews: We will periodically audit AI projects for compliance (e.g. by code review). This policy itself will be reviewed at least annually or whenever major AI legislation or technology changes occur.
- Disciplinary Measures: Non-compliance with this policy may result in disciplinary action, up to termination, especially if it risks legal exposure or client trust.
In adopting and enforcing this Responsible AI Policy, our company ensures we harness the benefits of AI while managing its risks. We provide innovative AI-powered solutions with the human judgment, transparency, and data protections that our clients expect.