

AI & Data Usage Policy Guide for SMEs

A practical framework for New Zealand businesses to develop responsible AI and data governance policies.

Responsible and ethical use of AI and data is crucial for business success and for maintaining trust with customers. This guide provides a comprehensive framework for small and medium-sized enterprises (SMEs) in New Zealand to develop an effective AI & Data Usage Policy.

It outlines key principles of data privacy, security, and AI ethics, along with practical steps for policy implementation, training, and compliance with relevant laws and regulations. By following this guide, SMEs can harness the power of AI and data while mitigating risks and fostering innovation.

Disclaimer

This template is intended as a general resource and starting point. It reflects best practices and standard guidelines but is offered "as-is" without guarantees of completeness or suitability for specific regulatory needs. It is not a substitute for tailored legal advice. We strongly recommend consulting a qualified legal professional to review and adapt this policy to meet the unique requirements of your business.

Key Considerations

Principles for your policy

When developing your policy, prioritise these six areas.

Transparency

Maintain clear communication with customers about data collection, usage, and protection practices.

Accountability

Clearly outline roles and responsibilities for data governance and AI ethics within your organisation.

Security

Implement robust security measures to protect data from unauthorised access and breaches.

Fairness

Ensure your AI systems are free from bias and promote fair treatment for all.

Explainability

Design AI systems that are understandable to users, providing clear explanations of their functionality.

Compliance

Stay informed about relevant laws and regulations to ensure your policy reflects current legal standards.

Legal Framework

Relevant NZ laws and guidelines

Understanding these is essential for building a compliant AI & data policy.

Privacy Act 2020

The foundation of New Zealand's privacy laws, setting clear rules on how personal information is collected, used, and stored. With AI often handling large amounts of personal data, the Act ensures people's rights are protected. It includes 13 Information Privacy Principles (IPPs) that outline responsible practices, from collection through to disposal of data.

Problem it solves: Addresses the risks of data misuse and breaches that can undermine public trust. By following these principles, businesses demonstrate respect for individual privacy and strengthen trust with clients and regulators.

Unsolicited Electronic Messages Act 2007

Regulates electronic marketing, a common channel for AI-driven customer interactions. It requires businesses to obtain consent before sending marketing messages, protecting individuals from spam and unwanted marketing.

Problem it solves: Sets a clear framework for ethical communication, emphasising transparency and consent in customer relations, which is essential for long-term business and customer loyalty.

Credit Reporting Privacy Code 2021

Governs the handling of credit information, which is sensitive and requires strong protection. For AI policies, compliance with this code is critical if AI systems are processing financial or credit-related data.

Problem it solves: Reduces the risk of financial data misuse, enforces transparency, and allows individuals to access and correct their information, promoting accountability in data-driven financial decisions.

Health Information Privacy Code 1994

Provides specific guidelines on handling health data, one of the most sensitive forms of personal information. For AI applications that involve health data, strict compliance is essential to protect patient privacy and ensure data security.

Problem it solves: Ensures patient data is treated responsibly, supporting trust in health services and aligning with ethical standards in healthcare.

Government Chief Privacy Officer (GCPO) Guidelines

Offers practical advice on implementing privacy best practices within New Zealand's public sector, and is also useful for private organisations. The guidance is particularly relevant for AI policies that require clear privacy protocols to manage potential risks.

Problem it solves: Helps businesses design privacy-protective systems, minimising the likelihood of breaches and ensuring compliance with established privacy norms.

Algorithm Charter for Aotearoa New Zealand

Established in July 2020, the Charter was designed primarily for government agencies, to promote transparency, accountability, and fairness in the use of algorithms. While it is not mandatory for private sector entities, its principles offer valuable guidance for SMEs developing AI and data usage policies.

Key commitments:

  • Transparency: Clearly explain how decisions are informed by algorithms, including plain English documentation.
  • Partnership: Incorporate a Te Ao Māori perspective consistent with the principles of the Treaty of Waitangi.
  • People: Engage with communities and groups affected by algorithmic decisions.
  • Data: Ensure data is fit for purpose, understand its limitations, and manage bias.
  • Privacy, Ethics, and Human Rights: Safeguard these by peer-reviewing algorithms and consulting experts.
  • Human Oversight: Maintain human oversight of algorithms and provide avenues for appeal.

Policy Framework

Building your policy section by section

Each section covers a specific aspect of data and AI governance. Expand each to see guidance and example text.

01 Purpose & Scope: Setting the Stage

What to Include

  • A clear and concise statement of the policy's purpose
  • Specific AI goals: define how your business utilises AI (e.g., customer service, automation, product development)
  • Data inventory: briefly describe the types of data collected (e.g., customer demographics, website analytics)
  • Who the policy applies to (employees, contractors, etc.)
  • What activities it covers (data collection, AI development, etc.)
  • Accountability: define roles and responsibilities (e.g., Data Protection Officer, AI development team)
  • A brief commitment to responsible data use and AI ethics
Example

This policy outlines [Your Company Name]'s commitment to responsible data handling and ethical AI practices in pursuit of [specific AI goals]. This includes:

  • Purpose: To define responsible data handling and ethical AI practices
  • Applicability: The policy applies to all employees, contractors, and partners
  • Activities Covered: Data collection, AI development, and related activities
  • Accountability: Roles and responsibilities are defined for the Data Protection Officer and AI development team
  • Commitment: We are committed to upholding the highest standards of data privacy and AI ethics in accordance with all relevant New Zealand legislation
02 Data Governance: The Foundation of Trust

Key Areas to Cover

  • Data Collection: Be transparent about how you collect data and why. Get consent when needed
  • Data Use: Use data for stated reasons only. Ensure only the right people have access. Respect privacy preferences
  • Data Storage & Security: Secure data using encryption, restrict access, and have breach response plans
  • Data Minimisation: Do not gather more than you need, and delete unnecessary data
  • Data Quality: Ensure accuracy. Regularly validate and update information
  • Data Subject Rights: Respect customers' rights to access, correct, and delete personal information
  • Data Mapping: Create a visual representation of data flow within the organisation
  • Transparency in Data Processing: Describe how data is processed, emphasising explicit consent
  • Data Retention Policy: Detail how long data is retained and the criteria for disposal
Example

We value your privacy. We only gather the information we need to run our business and provide you with the best possible service. This aligns with our commitment to:

  • Gathering information transparently and only with necessary consent
  • Using data strictly for stated purposes with authorised personnel access only
  • Securing data with encryption, access restrictions, and breach response plans
  • Collecting only what we need and deleting unnecessary data
  • Ensuring data accuracy through regular reviews and validation
  • Respecting rights under the Privacy Act 2020 for access, correction, and deletion
  • Maintaining a visual map of how data flows through our systems
  • Data retained only as long as necessary, after which it is securely deleted
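
To make the retention and minimisation commitments above concrete, here is a minimal Python sketch of a scheduled purge job that deletes records older than a defined retention period. The database, table and column names, and the seven-year period are placeholders to adapt to your own systems and retention schedule.

    import sqlite3
    from datetime import datetime, timedelta

    RETENTION_DAYS = 7 * 365  # placeholder retention period; set this from your policy

    def purge_expired_records(db_path: str) -> int:
        """Delete customer records older than the retention period; return how many were removed."""
        cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
        conn = sqlite3.connect(db_path)
        try:
            # Assumes a hypothetical customer_records table with an ISO-format collected_at column.
            cur = conn.execute(
                "DELETE FROM customer_records WHERE collected_at < ?",
                (cutoff.isoformat(),),
            )
            conn.commit()
            return cur.rowcount
        finally:
            conn.close()

    if __name__ == "__main__":
        removed = purge_expired_records("customer_data.db")
        print(f"Purged {removed} records past the retention period")

Running a job like this on a regular schedule, and logging the result, also gives auditors evidence that the retention policy is actually enforced.
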
03 Artificial Intelligence: Innovation with Integrity

Guiding Principles

  • Responsible AI: Build AI systems that are fair, unbiased, and transparent
  • Development & Deployment: Train AI models properly, verify outputs, and monitor ongoing performance. Implement robust data quality checks
  • Bias & Fairness: Actively work to eliminate bias and audit models for fairness throughout the AI lifecycle
  • Transparency & Explainability: Let people know when they are interacting with AI and how it makes decisions
  • Human Oversight: Keep humans in the loop, especially for decisions that impact people significantly
  • Risk Assessment: Conduct risk assessments identifying potential negative impacts and mitigation strategies
  • Monitoring and Auditing: Establish regular schedules for monitoring AI systems for bias, accuracy, and fairness
Example

We are excited about the potential of AI but aware of the risks. Our AI systems are designed with fairness, transparency, and accountability in mind. We train our models properly, verify outputs, monitor performance, and ensure critical decisions made by AI are overseen by a human. Our AI systems undergo regular monitoring and auditing for bias, accuracy, and fairness.
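
One way to make the fairness-audit commitment tangible is to track a simple metric such as the demographic parity gap, the difference in positive-prediction rates between groups defined by a protected attribute. The sketch below is illustrative only; the attribute, data, and any alert threshold are assumptions, and a real audit would combine several metrics across the AI lifecycle.

    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest gap in positive-prediction rates between groups, plus the per-group rates."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        preds = [1, 0, 1, 1, 0, 1, 0, 0]                   # hypothetical model outputs
        attrs = ["A", "A", "A", "A", "B", "B", "B", "B"]   # hypothetical protected attribute
        gap, rates = demographic_parity_gap(preds, attrs)
        print(f"Positive rates by group: {rates}")
        print(f"Demographic parity gap: {gap:.2f}")  # flag for human review if above your chosen threshold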

04 Legal & Regulatory Compliance

We are committed to complying with all relevant laws and regulations in Aotearoa, including the Privacy Act 2020, the Unsolicited Electronic Messages Act 2007, any industry-specific regulations, and, if applicable, international laws like the GDPR and CCPA.

International Considerations

While your business may primarily operate in New Zealand, be aware of international data protection laws like the GDPR and CCPA. These have extraterritorial reach and may apply if you handle personal data of individuals in the EU or California. Key principles include:

  • Data Subject Rights: Enhanced rights for individuals regarding their personal data, including access, rectification, erasure, and data portability
  • Data Protection by Design: Implementing data protection measures from the outset of any project
  • Accountability: Demonstrating compliance with data protection principles
Example

We comply with the Privacy Act 2020, the Unsolicited Electronic Messages Act 2007, and the Algorithm Charter for Aotearoa New Zealand, among others. Regular legal review helps us stay updated and ensures that our AI practices align with current legislation, maintaining a high standard of ethical conduct.

05 Training & Awareness: Empowering Your Team

What to Do

  • Provide regular training on data privacy, AI ethics, and this policy (at least annually)
  • Offer diverse training methods (e.g., online courses, workshops)
  • Ensure everyone understands and follows the guidelines
  • Highlight employee accountability — everyone is responsible for understanding the policy
  • Maintain records of training activities
Example

We invest in our people. Everyone at [Your Company Name] receives regular training on data privacy, AI ethics, and this policy. Training is conducted annually using a range of methods including online courses and in-person workshops. We maintain records of training activities to track participation and compliance.

06 Enforcement & Monitoring

Key Actions

  • Assign Responsibility: Designate a person or team responsible for enforcing the policy
  • Regular Monitoring: Establish a schedule for monitoring compliance, including audits of data practices, reviews of AI system performance, and analysis of breach reports
  • Audit Procedures: Outline specific steps including data sampling, interviews, and documentation review
  • Non-Compliance Consequences: Clearly state the consequences including disciplinary action, retraining, or other measures
  • Version Control: Maintain a record of all policy versions, dates, and a summary of changes
Example

We take this policy seriously. [Designated Person/Department] is responsible for enforcement. We conduct regular audits using specific procedures including data sampling, interviews, and documentation review. Non-compliance may result in disciplinary action, retraining, or other measures as appropriate.
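
The audit procedures above include data sampling. As a minimal sketch of what that can look like in practice, the snippet below draws a reproducible random sample of processing-log rows for manual compliance review; the file name, sample size, and seed are illustrative.

    import csv
    import random

    def sample_records_for_audit(csv_path: str, sample_size: int = 25, seed: int = 2024):
        """Draw a reproducible random sample of rows for manual compliance review."""
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        rng = random.Random(seed)  # fixed seed so the same sample can be re-examined later
        return rng.sample(rows, min(sample_size, len(rows)))

    if __name__ == "__main__":
        for row in sample_records_for_audit("data_processing_log.csv"):
            print(row)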

07 Reporting Concerns: Encouraging Openness

Create a Culture of Openness

  • Clear Reporting Process: Establish an accessible process for reporting concerns about data privacy or AI ethics
  • Designated Contact: Provide contact information for the responsible person or team
  • Confidentiality: Ensure concerns are treated confidentially and reporters are protected from retaliation
  • Whistleblower Protection: Explicitly protect individuals who report concerns in good faith
  • Anonymous Reporting: Consider providing a mechanism for anonymous reporting
  • Prompt Investigation: Ensure all reported concerns are investigated promptly and thoroughly
Example

We encourage an open and honest environment. If you have concerns about data privacy or AI ethics, please contact [Designated Person/Department] using our established reporting channels. We treat all concerns confidentially and individuals who report in good faith are explicitly protected from retaliation. You can also choose to report concerns anonymously.

08 Data Breach Response Plan

What to Include

  • Incident Identification: Define what constitutes a data breach (unauthorised access, disclosure, modification, or loss)
  • Immediate Actions: Isolate affected systems, change passwords, contact incident response personnel
  • Containment: Isolate affected systems and prevent further unauthorised access
  • Assessment: Assess severity including types of data involved, number of individuals affected, and potential impact
  • Notification: Define who to notify — affected individuals, the Privacy Commissioner, CERT NZ — and specify timeframes
  • Communication Plan: Develop templates for notifications and FAQs
  • Recovery: Restore systems and data, implement security patches, review protocols
  • Post-Breach Analysis: Identify root cause, learn from the incident, and improve security measures
Example Case Study

XYZ Company, a small online retailer, experienced a data breach affecting approximately 1,000 customers. Compromised data included names, email addresses, and purchase history. They followed their response plan:

  • Immediately isolated the affected server and changed all relevant passwords
  • Worked with their incident response team to contain the breach
  • Notified affected individuals, the Privacy Commissioner, and CERT NZ within the required timeframe
  • Restored systems from backups and implemented additional security measures
  • Conducted a forensic investigation to identify root cause and strengthen defences
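
To illustrate how the notification step can be supported in practice, the sketch below triages a breach report and assembles a simplified checklist. The fields and the rule of thumb it applies are assumptions for illustration only; whether a breach is likely to cause serious harm under the Privacy Act 2020 is a judgement to make with legal advice.

    from dataclasses import dataclass

    @dataclass
    class BreachReport:
        data_types: list          # e.g. ["email", "purchase_history"]
        individuals_affected: int
        sensitive: bool           # health, financial, or credit data involved
        mitigated: bool           # e.g. data encrypted or recovered before misuse

    def notification_checklist(report: BreachReport) -> list:
        """Build a simplified checklist of response steps for a privacy breach (illustrative triage aid only)."""
        steps = ["Record the incident in the breach register"]
        likely_serious_harm = report.sensitive or (report.individuals_affected > 100 and not report.mitigated)
        if likely_serious_harm:
            steps += [
                "Notify the Office of the Privacy Commissioner as soon as practicable",
                "Notify affected individuals, or publish a public notice if direct contact is not practicable",
                "Report the incident to CERT NZ if systems were compromised",
            ]
        steps.append("Schedule a post-breach review of root cause and controls")
        return steps

    if __name__ == "__main__":
        incident = BreachReport(["name", "email", "purchase_history"], 1000, sensitive=False, mitigated=False)
        for step in notification_checklist(incident):
            print("-", step)
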
09 Third-Party Data Sharing

This section applies to all external vendors, partners, and service providers offering AI, data analytics, or related services.

Vendor Selection Criteria

  • Reputation and Compliance: Vendors should demonstrate ethical, compliant practices, verified through certifications (e.g., ISO/IEC 27001)
  • Standards Adherence: Must meet all data security, privacy, and regulatory standards, including the NZ Privacy Act 2020
  • Transparency: Required to disclose AI methodologies and data handling processes
  • Data Protection: Must employ best practices including encryption and controlled access

Monitoring and Compliance

  • Initial Evaluation: All potential vendors undergo a compliance assessment
  • Ongoing Monitoring: Vendor practices are reviewed biannually
  • Audit Rights: Your organisation may audit vendors' AI practices and data handling as needed
  • Incident Reporting: Vendors must notify your organisation within 24 hours of any data breach or significant operational change

Termination Conditions

  • Non-Compliance: Vendors failing to meet compliance standards within 60 days face contract termination
  • Security Breaches: Contracts may be terminated for vendors involved in data security breaches or unethical AI practices
Due Diligence Checklist
  • Does the vendor have a documented information security policy?
  • What security certifications does the vendor hold (e.g., ISO 27001)?
  • Does the vendor comply with relevant data privacy laws?
  • How does the vendor address bias in AI systems?
  • Is transparency maintained in AI decision-making?
  • What are their incident response procedures?
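
One way to keep the due diligence checklist actionable is to record vendor answers in a structured form and flag gaps before onboarding. The question keys, scoring, and pass threshold below are illustrative placeholders, not a prescribed standard.

    # Illustrative due diligence criteria; extend with your own questions.
    DUE_DILIGENCE_QUESTIONS = [
        "documented_security_policy",
        "security_certification",        # e.g. ISO/IEC 27001
        "privacy_law_compliance",        # e.g. NZ Privacy Act 2020
        "ai_bias_controls",
        "transparent_ai_decisions",
        "incident_response_procedures",
    ]

    def assess_vendor(name: str, answers: dict, pass_threshold: float = 1.0) -> dict:
        """Score a vendor's yes/no due diligence answers and list any gaps needing follow-up."""
        gaps = [q for q in DUE_DILIGENCE_QUESTIONS if not answers.get(q, False)]
        score = 1 - len(gaps) / len(DUE_DILIGENCE_QUESTIONS)
        return {"vendor": name, "score": round(score, 2), "approved": score >= pass_threshold, "gaps": gaps}

    if __name__ == "__main__":
        result = assess_vendor("Example Analytics Ltd", {
            "documented_security_policy": True,
            "security_certification": True,
            "privacy_law_compliance": True,
            "ai_bias_controls": False,
            "transparent_ai_decisions": True,
            "incident_response_procedures": True,
        })
        print(result)  # the gaps list shows where to follow up before approval
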
10 AI Explainability

Focus On

  • Understandable AI: Strive for AI systems that are easy for users to understand, using explainable AI techniques (e.g., decision trees, rule-based systems, LIME)
  • Decision-Making Transparency: Offer insights into how AI arrives at conclusions, including the factors considered and data used
  • Building Trust: Explainability fosters trust and accountability by helping users understand how AI impacts their lives
  • User Feedback: Encourage users to provide feedback on explainability and use this to improve transparency
  • Continuous Improvement: Continuously improve explainability as AI systems evolve
Example

We believe in making our AI systems as transparent as possible. We use explainable AI techniques to make our systems understandable and provide clear explanations of how they work. We offer insights into how AI arrives at its conclusions and encourage users to provide feedback on AI explainability to enhance transparency.
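
As a small, concrete example of one technique named above, the sketch below trains a shallow scikit-learn decision tree and prints its rules as plain text, so a non-specialist can trace how a prediction was reached. The feature names and data are made up for illustration; real systems would document the actual factors and data sources used.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical training data: [years_as_customer, support_tickets_last_year]
    X = [[1, 5], [4, 1], [7, 0], [2, 6], [6, 2], [8, 1]]
    y = [1, 0, 0, 1, 0, 0]  # 1 = likely to churn, 0 = likely to stay

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # export_text produces human-readable rules that can be shared with affected users.
    print(export_text(model, feature_names=["years_as_customer", "support_tickets_last_year"]))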

11 Emerging Technologies

Key Considerations

  • Blockchain: Consider implications for data immutability, transparency, and decentralisation. Ensure compliance with data privacy regulations
  • Edge Computing: Address data security and privacy concerns specific to edge environments when processing data on local devices
  • Synthetic Data: If used for AI training, ensure it protects individual privacy and does not perpetuate biases
Example: Privacy in Edge Computing

A healthcare company uses edge computing for real-time diagnostics on mobile devices. To address privacy, it implements data minimisation (only essential data is transferred), encryption in transit and at rest, and strong authentication, and it retains data only as long as necessary before secure deletion.
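
A minimal sketch of the minimise-then-encrypt pattern described in this case study, using the widely used Python cryptography package, is shown below. The field names and payload are illustrative, and in practice the key would come from a key management service rather than being generated in the script.

    import json
    from cryptography.fernet import Fernet

    def minimise(reading: dict) -> dict:
        """Keep only the fields the diagnostic service actually needs (data minimisation)."""
        return {"device_id": reading["device_id"], "heart_rate": reading["heart_rate"]}

    # In practice the key comes from a key management service, not generated per run.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    raw_reading = {"device_id": "edge-042", "heart_rate": 72, "patient_name": "redacted locally"}
    payload = json.dumps(minimise(raw_reading)).encode()

    token = cipher.encrypt(payload)        # encrypted before leaving the device
    print(cipher.decrypt(token).decode())  # server-side decryption with the shared key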

12 Policy Review and Update Process

Key Actions

  • Frequency: Review at least annually or when new technologies are adopted, regulations change, or incidents occur
  • Stakeholders: Involve legal counsel, data protection officer, IT department, AI development team, and business unit representatives
  • Approval Process: Define a clear approval process ensuring updates are reviewed by appropriate personnel
Review Checklist
  • Have there been changes to relevant laws (Privacy Act, Algorithm Charter)?
  • Have we adopted new technologies that impact data privacy or AI ethics?
  • Have there been organisational changes to data handling practices?
  • Have we had any data breaches or near misses?
  • Are employees up-to-date on data privacy and AI ethics?
  • Have we received feedback or complaints about our practices?

Reference

Glossary of terms

AI Model: A computer program that can learn from data and make predictions or decisions.
Algorithm: A set of rules or instructions that a computer follows to solve a problem or perform a task.
Bias: Systematic errors in an AI system that lead to unfair or discriminatory outcomes.
Data Subject: An individual who can be identified from personal data.
Data Minimisation: Collecting and processing only the personal data necessary for the intended purpose.
Encryption: Converting data into a code to prevent unauthorised access.
Explainable AI (XAI): AI systems designed to be transparent and understandable to humans.
GDPR: The General Data Protection Regulation, a comprehensive data protection law in the European Union.
Machine Learning: A type of AI that allows computers to learn from data without explicit programming.
Privacy by Design: Incorporating data protection principles into the design and development of systems and processes.

This guide provides the framework. The next step is making it yours. Customise the content to reflect your company's values, culture, and AI objectives. Engage key stakeholders throughout the process and consult legal professionals for tailored advice.

Download the Template

By downloading, you agree to receive occasional updates from Belton IT Nexus. You can unsubscribe at any time.

Need help implementing your AI policy?

Our team can help you develop, customise, and implement an AI & data governance framework tailored to your business.