Did you know that a recent survey¹ found that 53% of Gen Z respondents had used generative AI tools like ChatGPT or DALL·E? More than a quarter said they had used these tools for work. Despite this, only one in five were aware of the cyber security and privacy risks.
If your workplace could be affected, it’s time to update your IT policy and put training in place so your employees know when and how they should be using these new tools.
Artificial intelligence is a game-changer, but like any powerful tool, you need to think about how your organisation can use it effectively. Will employees use AI in their client interactions, or should it be mainly for internal use? While AI could efficiently summarise a lengthy article, you may not want it to examine a contract worth thousands of dollars.
The risks
It also pays to think about the inherent risks of AI. Do you know how to manage the risks around privacy breaches, source accuracy, and alignment with your organisation’s standards? For instance, GPT-3.5’s training data stops in 2021, so the information it provides may be out of date.
You also need to think about how using AI would affect your organisation’s reputation. Relying on AI might be fine for a construction company, but it could seriously damage a marketing agency’s credibility if clients found out.
Training, training, training
From an HR perspective, it’s important to let your team know how, and when, to use AI. That makes training essential. You need to ensure they know how to navigate potential pitfalls. How will they avoid inadvertently disclosing sensitive information? Do they know how to check their sources? Training ensures your staff know how to mitigate these risks, and that you can take swift action if the policy is breached.
Next steps
Do you need to get set up for AI? Give us a call so we can guide you through the process.