Trust perceptions are holding back New Zealand’s AI adoption.

New Zealand finds itself at a critical juncture in AI adoption, with public perception significantly influenced by privacy concerns. The Ipsos NZ AI Monitor 2024 positions New Zealand as the second most AI-nervous country surveyed, just behind Ireland. This caution is reflected in organisational adoption rates, with New Zealand ranking seventh out of nine countries in the Asia Pacific region for GenAI implementation.

In the latest University of Melbourne | KPMG Trust in AI report, New Zealand ranks third to last for trust in AI.



The reluctance stems from several key concerns that New Zealanders hold about AI:

  • 54% believe AI will worsen the spread of false information, the second-highest level of concern globally

  • 85% worry about privacy issues related to AI systems

  • 85% are concerned about security vulnerabilities in AI platforms

  • 87% express anxiety about reliability and errors in AI outputs

Despite these concerns, 69% of New Zealanders claim to have a good understanding of AI, higher than the global average of 67%. However, only 40% say they know which products and services actually use AI, indicating a knowledge gap that likely contributes to misconceptions.

Clarifying Key Misconceptions About AI Data Usage

Misconception 1: "Everything shared with AI assistants becomes public"

This widespread belief that private conversations with AI tools somehow become publicly available is fundamentally incorrect. Major AI companies have strict privacy policies that explicitly prohibit making user conversations public without consent.

Reality: AI assistants like ChatGPT, Claude, and Perplexity operate under privacy policies that restrict how user data can be accessed. For instance, OpenAI states clearly that they "do not use public information on the internet to build profiles about people, advertise to or target them, or to sell user data". User conversations remain private unless explicitly shared by the user themselves.

Misconception 2: "All data uploaded to AI platforms is used to train models"

Many New Zealanders believe that everything they share with AI assistants automatically becomes training data for future model iterations, potentially exposing their sensitive information.

Reality: Most major AI providers now offer opt-out options for training. In March 2023, OpenAI implemented a policy under which "data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in)". Similarly, Anthropic (Claude) states: "We will not use your Inputs or Outputs to train our generative models (i.e. Claude), unless you've explicitly reported the materials to us or you've explicitly opted in to training". Perplexity allows users to opt out via its settings.

Misconception 3: "Private information becomes discoverable by others"

There's a persistent fear that confidential information shared with AI tools will somehow leak to other users or become available through specific prompting techniques.

Reality: Modern AI models don't simply regurgitate training data. As OpenAI explains: "Our models generate new words each time they are asked a question. They don't store information in a database for recalling later or 'copy and paste' training information when responding to questions". Systems are designed with safeguards to prevent one user's information from being shared with others.

How Major AI Platforms Actually Handle User Data

OpenAI (ChatGPT)

OpenAI has evolved its data policies significantly in response to privacy concerns:

  • ChatGPT Free and Plus users can control whether their data contributes to future model improvements through their settings

  • "Temporary Chats" feature ensures those conversations aren't used for training models

  • Enterprise and Team versions explicitly do not use customer data for training by default

  • API data may be retained for up to 30 days for abuse monitoring, after which it is deleted (a minimal API sketch follows this list)

  • For trusted customers with sensitive applications, zero data retention options are available
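
To make the API bullet concrete, here is a minimal sketch of a standard call using OpenAI's official Python SDK. It is illustrative only: the model name and prompt are placeholders, and the comments restate OpenAI's published policy rather than anything enforced by the code itself.

```python
# Minimal sketch: a standard OpenAI API call via the official Python SDK.
# Per OpenAI's stated policy, API inputs and outputs are not used to train
# models by default, and abuse-monitoring retention is limited to 30 days
# (or zero for approved zero-data-retention customers).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarise our leave policy in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

Note that these guarantees attach to the API and business tiers; consumer ChatGPT accounts rely on the settings and Temporary Chats controls described above.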

Anthropic (Claude)

Anthropic has positioned itself with strong privacy-first policies:

  • Does not use customer inputs or outputs to train its generative models unless users explicitly opt in

  • For business customers, "Anthropic only processes the data as instructed by the Customer in order to provide the Claude service. Anthropic does not use the data to train generative models" (see the sketch after this list)

  • When users provide feedback, Anthropic de-links the feedback from user IDs before using it

  • Flagged conversations that violate usage policies may be analysed to improve safety systems, but not to train the main model
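
The same pattern applies with Anthropic's API. The sketch below uses Anthropic's official Python SDK; per the commercial terms quoted above, inputs and outputs sent this way are not used to train Claude. The model name is an illustrative placeholder.

```python
# Minimal sketch: an Anthropic Messages API call via the official Python SDK.
# Per Anthropic's commercial terms, customer inputs and outputs are not used
# to train its generative models unless the customer explicitly opts in.
import anthropic

client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Draft a short privacy notice for an internal chatbot."},
    ],
)

print(message.content[0].text)
```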

Perplexity

As a newer player in the AI assistant space, Perplexity:

  • Collects data from user devices and interactions with their site

  • Does not sell, trade, or share personal information with third parties, except as outlined in their privacy policy

  • Provides an opt-out option via the AI Data Usage toggle in user settings

  • As with other platforms, users retain control over how their data is used for improving the service

Building Trust in AI for Business Use

Organisations that implement robust AI governance see tangible benefits. According to Deloitte's research across Asia Pacific:

  • Organisations with mature AI governance frameworks report a 28% increase in staff using AI solutions

  • These organisations experience nearly 5% higher revenue growth compared to those with less established governance

  • 45% report that enhanced AI governance positively impacts their reputation among customers

However, there's significant room for improvement: 91% of surveyed organisations were categorised as having only "basic" or "in progress" governance structures. In New Zealand specifically, just under half of employees have the skills to use AI in an ethically and legally compliant way.

The Deloitte Trustworthy AI Framework outlines seven dimensions necessary for building trust in AI solutions:

  1. Transparent and explainable

  2. Fair and impartial

  3. Robust and reliable

  4. Respectful of privacy

  5. Safe and secure

  6. Responsible

  7. Accountable


Practical Approaches for Addressing AI Trust Issues

For Business Leaders

  1. Implement governance frameworks: Develop clear policies around AI use, data handling, and privacy protection that align with New Zealand's Privacy Act and cultural considerations (a hypothetical policy-check sketch follows this list).

  2. Choose enterprise versions: For sensitive business information, consider enterprise versions of AI tools that provide enhanced privacy protections and don't use data for model training.

  3. Educate on platform settings: Ensure employees understand how to use privacy features like ChatGPT's "Temporary Chats" or platform opt-out settings when handling sensitive information.

  4. Verify vendor claims: Review and verify the privacy policies of AI providers and request detailed information about data handling practices when necessary.
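
As one way to make point 1 tangible, a governance policy can be encoded as a machine-readable register of approved tools and the data classifications each may handle, which internal tooling checks before anything is sent. The sketch below is entirely hypothetical: the tool entries, classification levels, and function are illustrative assumptions, not a standard or any vendor's API.

```python
# Hypothetical sketch: a register of approved AI tools and the data
# classifications each may handle. All entries and levels are illustrative.
from dataclasses import dataclass


@dataclass
class AIToolPolicy:
    name: str
    trains_on_input: bool  # does the vendor train on submitted data?
    max_data_class: str    # highest data classification permitted

APPROVED_TOOLS = {
    "chatgpt-enterprise": AIToolPolicy("ChatGPT Enterprise", False, "confidential"),
    "claude-api": AIToolPolicy("Claude via API", False, "confidential"),
    "consumer-chatbot": AIToolPolicy("Consumer chatbot, opt-out unset", True, "public"),
}

DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2}

def is_use_permitted(tool_id: str, data_class: str) -> bool:
    """Return True if the tool may handle data of the given classification."""
    policy = APPROVED_TOOLS.get(tool_id)
    if policy is None:
        return False  # unapproved tools are denied by default
    return DATA_CLASS_RANK[data_class] <= DATA_CLASS_RANK[policy.max_data_class]

print(is_use_permitted("claude-api", "confidential"))    # True
print(is_use_permitted("consumer-chatbot", "internal"))  # False
```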


Conclusion

The trust issues surrounding AI in New Zealand represent both a challenge and an opportunity for businesses. By addressing misconceptions with accurate information and implementing strong governance frameworks, organisations can overcome privacy concerns and realise the benefits of AI adoption.

While New Zealanders' caution around AI is understandable, the reality of how major platforms handle data is considerably more nuanced and protected than commonly perceived. When business leaders and employees understand the actual data practices, privacy controls, and regulatory landscape, they can make more informed decisions about AI adoption that balance innovation with appropriate privacy safeguards.

As AI continues to evolve, maintaining transparency about data usage policies and demonstrating commitment to ethical AI practices will be essential for building and maintaining trust in these powerful technologies.

Justin Flitter

Founder of NewZealand.AI.

http://unrivaled.co.nz