Has the AI horse bolted? Or can we still put policies in place to maintain public safety?
This morning on TV1 Breakfast, we had a brief discussion on the rise of Generative AI.
Major concerns:
Given the exponential adoption rate of generative AI, is the NZ government moving too slowly to deploy policy and regulation that protects the public from potential risks?
I suggest governments need to collaborate on globally consistent policies, such as watermarking AI-generated synthetic content. We need greater transparency about which media has been generated by a machine.
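To make the idea of machine-readable disclosure concrete, here is a minimal, hypothetical sketch in Python using the Pillow library: it embeds a plain-text "generated by AI" label in a PNG file's metadata and checks for it later. The label keys ("ai_generated", "generator") and file names are illustrative assumptions, not any standard; this is not a robust watermark (metadata is trivially stripped), and real provenance schemes such as the C2PA Content Credentials standard go much further.

```python
# Hypothetical sketch: a simple machine-readable "synthetic content" label
# embedded in PNG metadata. Illustrative only; not a cryptographic watermark.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_synthetic(in_path: str, out_path: str, generator: str) -> None:
    """Copy an image, adding text chunks declaring it AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # assumed label key, for illustration
    meta.add_text("generator", generator)   # e.g. the model or tool that made it
    Image.open(in_path).save(out_path, pnginfo=meta)


def is_labelled_synthetic(path: str) -> bool:
    """Check whether the image carries the illustrative disclosure label."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"


if __name__ == "__main__":
    # Hypothetical file names for the example.
    label_as_synthetic("render.png", "render_labelled.png", "example-image-model")
    print(is_labelled_synthetic("render_labelled.png"))  # True if the label survived
```

The point of the sketch is simply that disclosure can be automated and checked by software, which is why globally consistent rules about how such labels are applied would matter.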
The adoption curve for artificial intelligence over the last 12 months has been the fastest of any new technology in history. But let's not forget AI has been around since the 1950s.
It's the combination of computing power, the availability of vast amounts of data (essentially everything that's published on the internet), and new large language models that has given birth to Generative AI platforms like ChatGPT.
Competition is fierce between Microsoft, which invested in OpenAI and ChatGPT, and Google, which has taken a slightly more conservative approach. Today we're at the very start of this adoption curve, and given the rapid pace, people, societies and governments are now racing to understand the positive and negative ramifications. This has sparked concerns about the use of AI by bad actors, and about the potential for AI to act all on its own.
We all think about money. But do we have money, or does money have us? We need to make sure we have AI, not let AI own us.
We have the choice to take control, or to be apathetic and let it take control of us. We need to lean in and master AI, or it will master us.
Education is essential to help people experience first-hand how this new technology works.
Companies book us to present a lunch-and-learn or executive briefing to their teams. We start the discussion on what these tools are, why they matter, and how they could benefit their work or impact their lives. We give people the vocabulary to experiment with AI tools, building confidence and understanding.