ANSIRA AI GOVERNANCE POLICY
Version: 1.6
Date: October 14, 2025
Review Cycle: Annual (November)
Responsible: AI, Legal, Information Security
1. INTRODUCTION. This document establishes Ansira’s AI Governance & Data Privacy Policy, providing guidelines for the responsible use of Artificial Intelligence (AI), Generative AI, and Machine Learning (ML) across our organization and in service to our clients. This policy ensures that AI is implemented securely, ethically, and in alignment with our commitment to protecting client data and maintaining standards of compliance.

Ansira is committed to providing secure and responsible AI services tailored to the needs of our clients. Whether deploying AI internally to enhance employee productivity or integrating AI into our platform and client services, we maintain governance frameworks that prioritize data protection and ethical practices. As AI technology evolves, Ansira will update our AI capabilities, training programs, and policies. Employees and clients can expect communication about changes to AI governance practices and new AI-enabled capabilities.
2. OBJECTIVE. Ansira is committed to leveraging AI to deliver value to our clients and employees while maintaining governance, security, and ethical standards. This policy governs two key areas:
2.1. Internal AI Enablement empowers employees with approved AI tools to enhance productivity and streamline workflows while ensuring compliance with security and privacy requirements.
2.2. AI in Products & Services governs the integration of AI into Ansira’s SaaS platform and client services to improve outcomes while maintaining data protection, ethical AI principles, and regulatory compliance.
3. CORE PRINCIPLES.
3.1. Innovation with Accountability: AI enhances our capabilities when implemented within proper governance frameworks. All AI usage is monitored and measured against security and compliance standards.
3.2. Client Data Protection First: Protecting client data is our highest priority. We implement strict controls to ensure client data remains secure and private and is never used inappropriately or shared without explicit written consent.
3.3. Best-in-Class Tools: Ansira uses enterprise-grade, compliant AI tools that meet rigorous security standards. While specific tools may evolve, our governance framework remains constant.
3.4. Ethical & Transparent AI: We are committed to fair, unbiased, and transparent AI practices. All AI systems undergo evaluation for potential risks, and we provide explanations of how AI is used in our services.
3.5. Continuous Education: All employees receive ongoing training on responsible AI use, data protection, bias mitigation, and ethical practices. We believe informed employees are essential to responsible AI adoption.
3.6. Regular Evaluation: AI tools and practices are reviewed quarterly by our Technology Steering Committee to ensure they remain effective, secure, and aligned with business objectives and emerging best practices.
4. APPROVED AI TOOLS. Employees may only use AI tools that have been formally approved by Ansira’s Technology Steering Committee and designated as “sanctioned.” A current list of sanctioned AI tools is maintained on the company intranet and updated as approvals are granted. Employees must access sanctioned AI tools exclusively through company-provided accounts and licenses, or company-approved API integrations managed by our IT department. Prohibited actions include using personal AI tool accounts for any work-related activities, submitting any company or client data to non-sanctioned AI tools, and sharing company AI tool credentials with external parties.
5. CLIENT DATA PROTECTION & MODEL ISOLATION.
5.1. Dedicated Client Models: Each client is provided with a dedicated, isolated AI model designed exclusively for their requirements and aligned with client instructions. Custom training and updates are performed only with the client’s data. No cross-use of data occurs between clients.
5.2. Data Isolation: Each client’s data is logically isolated within our infrastructure and used exclusively for their AI model. Our systems implement strict technical and logical controls to ensure complete separation, preventing any data mixing, cross-training, or leakage between clients.
5.3. No Unauthorized Training: Client data is never used to train general-purpose AI models, improve third-party AI systems, or serve any purpose outside the agreed service scope without explicit, documented written permission from the client. Client data is likewise never shared or sold.
5.4. Data Minimization: AI systems only access the minimum data necessary to perform their intended function. Employees are trained to avoid submitting unnecessary sensitive information to AI tools. Only the data required for the agreed AI use case is processed.
5.5. Retention & Deletion: AI-processed client data follows the same retention and deletion policies as all other client data. Upon project completion, contract termination, or client request, all client data is deleted from AI systems or returned in accordance with our data retention policy.
6. SECURITY & COMPLIANCE STANDARDS. All AI usage at Ansira must meet SOC 2 Type II requirements; CCPA (California Consumer Privacy Act) and GDPR (General Data Protection Regulation) standards; internal information security policies and access controls; and contractual obligations specific to each client engagement.
6.1. AI Model Risk Management: All AI models undergo formal risk assessments before deployment, evaluating security vulnerabilities and data protection measures, regulatory compliance across applicable jurisdictions, potential bias, fairness issues, and accuracy, and robustness and reliability under various conditions.
6.2. Hosting & Infrastructure: Our infrastructure is hosted on trusted enterprise providers Microsoft Azure and Google Cloud Platform, both maintaining SOC 2 Type II and SOC 3 security certifications. Ansira conducts regular reviews and assessments to ensure compliance, resilience, and continuous improvement.
6.3. AI Providers & Encryption: Trusted AI providers including Microsoft Azure, Google Vertex, OpenAI, Anthropic, and Hugging Face are used for defined use cases such as text and image generation, summarization, and transcription under strict encryption and privacy safeguards. All data is encrypted in transit and at rest. These providers do not collect or use client data for training or improving their models.
7. EXTERNAL INTEGRATIONS & API ACCESS. Any AI tool that connects to external systems such as CRM platforms, marketing automation tools, or client databases requires written approval from both the Technology Steering Committee and the Information Security team before implementation. Approval requirements include security assessment of the integration, documentation of data flows and access permissions, review of third-party SOC 2 compliance status, and business justification and risk assessment. Any AI-generated actions that create, modify, or delete data in Ansira applications require additional case-by-case approval from the Technology Steering Committee and must include audit logging.
8. TRAINING & AWARENESS. All employees with access to AI tools must complete initial AI Governance Policy training within 30 days of hire or tool access, annual refresher training on policy updates and responsible AI practices, and role-specific training for employees using specialized AI tools. Training covers data protection, recognizing bias, security best practices, and appropriate use cases for AI tools.
9. INTRODUCING NEW AI TECHNOLOGIES. To maintain security and compliance while enabling innovation, new AI tools must follow the approval process below; most reviews are completed within 30 to 45 business days of submission:
9.1. Request Submission: Employees or teams submit a formal request to the Technology Steering Committee including business justification and expected benefits, description of the AI tool and its intended use, data types and sources that will be processed, and proposed user group and access requirements.
9.2. Security & Compliance Review: The Information Security team evaluates SOC 2 Type II certification status, data privacy and protection capabilities, compliance with CCPA, GDPR, and contractual obligations, and vendor security practices and incident response history.
9.3. Risk & Business Assessment: The Technology Steering Committee reviews alignment with strategic priorities and business objectives, cost-benefit analysis and budget considerations, technical feasibility and integration requirements, and potential risks and mitigation strategies.
9.4. Decision & Implementation: Approved tools are procured and configured according to security requirements, added to the sanctioned tools list with usage guidelines, announced to relevant employee groups with training resources, and monitored for effectiveness and compliance on an ongoing basis.
10. AI MODEL LIFECYCLE MANAGEMENT.
10.1. Ongoing Monitoring. All deployed AI models are monitored for performance and accuracy metrics, security incidents or anomalies, compliance with current regulations, and bias or fairness issues in outputs.
10.2. Model Retirement. AI models are decommissioned when they pose security risks due to outdated technology, are superseded by more effective or efficient models, no longer comply with current regulatory requirements, or are no longer aligned with business needs. The retirement process includes a formal decommissioning plan reviewed by the Technology Steering Committee, archival or secure deletion of all associated data per the retention policy, revocation of access credentials and disabling of systems, and documentation maintained for audit purposes.
11. GOVERNANCE STRUCTURE & ACCOUNTABILITY.
11.1. Technology Steering Committee oversees AI strategy, approves new tools, and ensures alignment with business objectives. The committee meets monthly to review AI initiatives and quarterly to assess overall AI governance.
11.2. Information Security Team conducts security assessments, monitors compliance, and manages incident response related to AI tools.
11.3. Department Leaders are responsible for ensuring their teams follow this policy and complete required training.
11.4. Individual Employees are accountable for using AI tools responsibly, protecting client data, and reporting any suspected policy violations or security concerns.
12. INCIDENT RESPONSE & POLICY VIOLATIONS.
12.1. Reporting: Employees who observe or suspect AI policy violations or security incidents must immediately report them to their manager and the Information Security team via [email protected].
12.2. Investigation: All reported incidents are investigated promptly. Appropriate corrective actions are taken based on the severity and nature of the violation.
12.3. Consequences: Policy violations may result in disciplinary action up to and including termination of employment, depending on the severity and intent of the violation.
13. CLIENT TRANSPARENCY & RIGHTS.
13.1. Disclosure: Clients are informed about how AI is used in their services and have the right to request information about which AI tools process their data, explanations of AI-driven recommendations or decisions, opt-out options for specific AI features where feasible, and copies of AI processing records as part of standard data access requests.
13.2. Audit Rights: Clients may request evidence of our AI governance compliance, including relevant portions of security assessments and SOC 2 reports, subject to confidentiality requirements.
13.3. Ethical Standards & Continuous Evaluation: Ansira is committed to fair, transparent, and bias-aware AI usage. Clients and employees are informed about limitations, risks, and best practices for ethical AI use. AI tools, security practices, and governance frameworks are continuously updated to meet evolving standards.
14. POLICY REVIEW & UPDATES. This policy is reviewed annually (every November) and updated as needed to reflect changes in AI technology and capabilities, new regulatory requirements, lessons learned from incidents or audits, and evolving industry best practices. Employees and clients will be notified of material policy changes within 30 days of approval.
15. CONTACT & QUESTIONS.
Policy Interpretation: AI Governance Team at [email protected]
Security Concerns: Information Security Team at [email protected]
New Tool Requests: Technology Steering Committee at [email protected]
Privacy Questions: [email protected]
Technical Support: [email protected]
16. REVISION HISTORY
Version | Date | Description of Change
---|---|---
1.0 | March 11, 2025 | Original internal governance policy |
1.1 | August 13, 2025 | Minor formatting changes |
1.2 | August 15, 2025 | Accepted edits, minor language changes, removed specific tooling names |
1.3 | August 15, 2025 | Proposed changes to simplify and enhance flow |
1.4 | September 30, 2025 | Added policy review cycle and process |
1.5 | October 2025 | Merged internal governance policy with client-facing data privacy commitments into single document |
1.6 | October 14, 2025 | Updated list of cloud providers and AI model providers