James Henderson

Assessing the risks and rewards of AI in Australia

Ask ChatGPT to outline the biggest pros and cons of artificial intelligence (AI) and the popular tool built on a large language model (LLM) will – after a few seconds of thinking – advocate in favour of itself.

Articulating 13 rewards against 11 risks, even this surface-level exercise demonstrates the inextricable link between the upsides and downsides of a technology capturing the enterprise imagination at an unrelenting pace.

“Often, there’s this connection between AI and risk,” acknowledged Stela Solar, Director of the National AI Centre, hosted by CSIRO’s Data61. “But the biggest risk for organisations today is not going on the AI journey.”

Kick-starting a panel of AI experts at Dell Technologies Forum in Melbourne, Solar shared that Australian businesses embracing AI are reporting stronger performance across customer experience, speed to innovation and decision making.

“All of those things actually mean creating a competitive advantage,” Solar said. “No organisation wants to miss out on that so that’s the huge risk.”

(L-R): David Siroky (Dell Technologies); Jason Murrell; Sue Keay (BHP) and Stela Solar (CSIRO)

Despite holding the mandate of building value for Australian “people, businesses and the country” through the use of AI, Solar stopped short of offering an unqualified endorsement.

Enthusiasm is counterbalanced by an acceptance that AI must be utilised responsibly. For many innovative businesses open to new ideas and technologies, this is when the problems start to mount.

According to Solar, the AI debate in Australia can best be divided into three core issues:

  • Generative AI: Some reports suggest that between 30% and 40% of employees within an organisation are using or trying to use generative AI for work, but only 68% of those users are telling anyone about it. The use of a new tool could be a productivity signal, suggesting more creative and efficient ways of doing something, but if a business is unaware, how can it govern that use or implement policies to address it?

Action: Create a generative AI policy for the business because, whatever the company’s official position, employees are likely to use the tools anyway.

  • Risk assessment framework: It’s easy to bucket AI as either all risk or all brightness and optimism. In truth, the technology is a mix, and a spectrum of use cases exists, so implement a risk assessment framework.

Action: Develop an in-house framework or use an existing one, but find a way to triage the various AI use cases in the market. Then move forward with the low-risk ones capable of creating competitive advantage (a minimal sketch of such a triage follows this list).

  • Hire an AI leader: When a CEO leads the AI strategy of an organisation, that organisation becomes more responsible with its use of AI. AI activation is holistic, so avoid siloing the technology.

Action: Senior executive leadership is critical, so recruit an AI leader with the ownership and authority to implement this technology responsibly.
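To make the triage step concrete, here is a minimal sketch of how a backlog of use cases might be scored and filtered. The scoring scale, thresholds and example use cases are assumptions for illustration only, not part of any published framework:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    risk: int    # 1 (low) to 5 (high): privacy, safety and bias exposure
    value: int   # 1 (low) to 5 (high): expected competitive advantage

# Hypothetical use cases, invented for illustration
backlog = [
    UseCase("Internal meeting summarisation", risk=1, value=3),
    UseCase("Customer-facing chatbot", risk=3, value=4),
    UseCase("Automated credit decisions", risk=5, value=4),
]

# Triage: progress low-risk, high-value work first; escalate the rest
green_light = [u for u in backlog if u.risk <= 2 and u.value >= 3]
needs_review = [u for u in backlog if u not in green_light]

print("Proceed:", [u.name for u in green_light])
print("Escalate for governance review:", [u.name for u in needs_review])
```

In practice the risk score would be informed by structured privacy, safety and bias assessments rather than a single number, but even a simple grid like this forces the conversation Solar describes.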

AI in action: positive advancements

In assessing AI enthusiasm, Solar said Australia has “slammed on the accelerator” in terms of adoption with the landscape now “shifting so rapidly”.

One such example is Sapia.ai, a Melbourne-based recruitment start-up using AI to remove bias from the hiring process.

The core engine that drives Sapia.ai is an AI Smart Interviewer, which automates scheduling, screening, interviewing and assessing. Potential candidates answer questions via an advanced chatbot that resembles a responsive help desk, with results then shared with the employer in the form of a score and trait profile, alongside a curated shortlist of top candidates.
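Sapia.ai’s models are proprietary, but the general shape of the approach – turning free-text interview answers into a score and trait profile – can be illustrated with a deliberately naive sketch, in which the traits, keywords and scoring are invented for illustration:

```python
# Toy structured-interview scoring; Sapia.ai's actual models are
# proprietary, and these traits and keywords are invented examples.
TRAIT_KEYWORDS = {
    "teamwork": {"we", "team", "together", "helped"},
    "initiative": {"started", "proposed", "led", "created"},
    "resilience": {"despite", "overcame", "persisted", "again"},
}

def score_answer(answer: str) -> dict:
    """Count keyword hits per trait in a free-text answer."""
    words = set(answer.lower().split())
    return {trait: len(words & kws) for trait, kws in TRAIT_KEYWORDS.items()}

profile = score_answer(
    "Despite setbacks, we persisted as a team and I proposed a new plan."
)
print(profile)  # {'teamwork': 2, 'initiative': 1, 'resilience': 2}
```

A production system would use trained language models rather than keyword counts, but the output shape – a per-trait profile handed to a human decision maker – is the same idea.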

“This is an Australian company developing chatbots for the early stage interview process,” Solar explained. “So rather than talking with a person during the first interview, candidates will interact with a chatbot.

“What’s most intriguing is that by using AI in this specific use case, some biases can be removed from the process. Often we associate AI with bias but in this instance, it’s actually reversing it.”

Notably, a chatbot is more likely to hire a woman than a human interviewer during an initial face-to-face interview. Plus, when women realised they would be assessed by chatbots during the first interview, 30% more applied.

“It’s amazing to think that there are people out there who believe they will be treated more fairly by AI than by a person,” Solar said. “Bias is all around us and this is an example of AI actually tackling that, helping us drive innovation and gain that competitive advantage.”

Equity is critical to ensuring AI reaches as many people as possible in Australia, but despite Sapia.ai standing tall as a leading homegrown example, Solar acknowledged that many gender prejudice hurdles remain.

“Diversity or a lack of diversity in data is an issue,” she outlined. “Data by default is historical and there have been gaps in data collection and biases in how it’s collected.”

To address this, Solar recommended intercepting such biases by building diverse teams when designing the technology, citing the work of HIVE, an offering pioneered by Royal Perth Hospital.

Known as Health in a Virtual Environment, the solution represents an innovative way of providing additional care for patients.

By using state-of-the-art technology, expert clinicians within the HIVE support the clinicians at the bedside of patients by providing 24/7 remote monitoring.

The HIVE uses AI to detect early signs of vitals worsening and instantly alerts relevant clinicians, who use a two-way audio-visual system to collaborate with the ward team to provide immediate care.
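HIVE’s models are not public, but the basic alerting pattern – continuously checking incoming vitals against safe ranges and escalating to clinicians – might look something like the following sketch, with purely illustrative thresholds:

```python
# Minimal sketch of deterioration alerting; HIVE's actual models are not
# public, and these vital-sign ranges are illustrative only.
VITAL_LIMITS = {
    "heart_rate": (50, 120),   # beats per minute
    "spo2": (92, 100),         # blood oxygen saturation, %
    "resp_rate": (10, 24),     # breaths per minute
}

def check_vitals(reading: dict) -> list[str]:
    """Return an alert for any vital outside its safe range."""
    alerts = []
    for vital, (low, high) in VITAL_LIMITS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# One reading from a monitored bed; in practice this would be a live stream
print(check_vitals({"heart_rate": 134, "spo2": 89, "resp_rate": 18}))
```

Real early-warning systems look at trends across time rather than single readings, which is where the machine learning models come in.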

(L-R): Stela Solar (CSIRO); David Siroky (Dell Technologies); Sue Keay (BHP) and Jason Murrell

“A pod of four medical professionals are using this technology to monitor 200 patients,” Solar explained. “This solution is about expanding the reach of medical expertise across many patients and solving the resource gap when providing quality care.

“HIVE is dealing with very sensitive data, data that is supposed to inform a decision. But those models might be biased by the data on which they were built.”

To remedy this, HIVE created a community advisory group – viewed as general best practice – to co-design solutions with the communities and customers that they serve.

“What was really impressive was when they connected this advisory group to the cadence of a machine learning lifecycle,” Solar said. “So whenever there was data drift and the machine learning model might suggest changes, that would trigger a meeting of the advisory group.

“They would then discuss if action was needed and if any gaps or biases in the data were present. That is one way to intercept the bias.”
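As a rough illustration of that trigger, a drift metric such as the Population Stability Index (PSI) can be computed between the training-time and live data, with a breach convening the review. The threshold and the governance hook here are assumptions, not HIVE’s actual process:

```python
import math

# Sketch of drift detection wired to a human review step; the threshold
# and the advisory-group hook are assumptions for illustration.
def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins or 1.0
    total = 0.0
    for i in range(bins):
        b_lo, b_hi = lo + i * step, lo + (i + 1) * step
        e = max(sum(b_lo <= x < b_hi for x in expected) / len(expected), 1e-6)
        a = max(sum(b_lo <= x < b_hi for x in actual) / len(actual), 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [70 + (i % 20) for i in range(200)]  # training-time distribution
live = [82 + (i % 20) for i in range(200)]      # shifted live distribution

if psi(baseline, live) > 0.2:  # common rule-of-thumb cutoff for major shift
    print("Drift detected: convene the community advisory group")
```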

Another example of how AI is being positively accelerated is within the mining sector, with BHP at the forefront of autonomous haulage technology through the deployment of driverless trucks.

“For many years, the mining industry has had autonomous load haulage vehicles or big robot trucks but if you visit a mine site today, you’d probably wonder where all the other robots were given we’ve had 20 years to develop them,” shared Sue Keay, Robotics Technology Lead at BHP.

But according to Keay – who is also Chair of the Robotics Australia Group – the reality of automating a system is much more complex.

“You often have to take quite some time to achieve this,” she outlined. “You have to develop the platforms but also potentially wait until your existing assets reach the end of life. But by introducing self-driving machinery kits, you can pretty much start your automation journey tomorrow, or at least rapidly.”

This approach makes use of the assets already in play on site and converts them into either tele-operated or autonomous machines.

“This can help accelerate the adoption of AI across the industry,” Keay added.

Assessing AI risks

Despite a wave of AI enthusiasm flooding through the Australian market, the panel outlined the downside of AI in the context of:

  1. Security
  2. Hallucinations
  3. Jobs

Specific to security, Jason Murrell – speaking as former Group Executive of AustCyber – advised caution for businesses seeking to blindly embrace this new technology without due diligence.

“The risks are real,” he said.

Murrell cited the recent breach of ChatGPT as an obvious warning sign for organisations. The tool was taken offline in March due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history.

The same bug may have also caused the “unintentional visibility” of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window. This included details such as first and last name, email address, payment address, credit card type and the last four digits (only) of a credit card number, and credit card expiration date.

“Users are putting IP, legal contracts and confidential information into these types of chatbots rather than paying for a lawyer or expert,” Murrell said.

For Murrell, the adoption of AI is arguably more accelerated and advanced in the criminal rather than corporate world with cyber hackers embracing the technology at speed.

“They’re working on the dark web and collaborating more closely – they are sharing insights,” Murrell outlined.

“We’re playing catch-up in that sense and if you look at Australia, three of the four biggest cyber breaches in our country have happened during the past 11 months or so. There’s more Australian data out there on a per capita basis than anywhere else in the world.”

Advising that “no silver bullet exists” – despite the bravado of some industry vendors – Murrell highlighted the work of Melbourne-based 6clicks in using AI and machine learning to revolutionise how organisations tackle the ongoing issue of governance, risk management and compliance (GRC).

The platform allows businesses to either implement a risk and compliance program or achieve compliance with ISO 27001, SOC 2, PCI-DSS, HIPAA, NIST, FedRAMP and many other standards.

“6clicks launched four years ago and was very early in this area,” Murrell explained. “The technology simplifies GRC policy through a single pane of glass and provides a clear picture of whether you’re compliant or non-compliant.”
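6clicks’ data model is proprietary, but the “single pane of glass” idea – mapping one set of internal controls onto many frameworks and reporting coverage – can be sketched simply. The control names and mappings below are invented examples:

```python
# Toy cross-framework control mapping; 6clicks' actual data model is
# proprietary, and these control IDs and mappings are invented examples.
CONTROL_MAP = {
    "Access control policy": {"ISO 27001": "A.9", "SOC 2": "CC6.1"},
    "Encryption at rest": {"ISO 27001": "A.10", "PCI-DSS": "3.4"},
    "Incident response plan": {"ISO 27001": "A.16", "SOC 2": "CC7.3"},
}

implemented = {"Access control policy", "Incident response plan"}

def coverage(framework: str) -> str:
    """Report how many controls relevant to a framework are in place."""
    relevant = [c for c, m in CONTROL_MAP.items() if framework in m]
    done = [c for c in relevant if c in implemented]
    return f"{framework}: {len(done)}/{len(relevant)} mapped controls in place"

for fw in ("ISO 27001", "SOC 2", "PCI-DSS"):
    print(coverage(fw))
```

The design point is that each control is assessed once and the result fans out to every framework it maps to, which is what collapses GRC work into one view.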

Attendees at Dell Technologies Forum in Melbourne

Another area of concern is hallucinations, evident through the recent fining of a New York-based lawyer who relied on ChatGPT for research in a personal injury case.

In June, Steven A. Schwartz, fellow lawyer Peter LoDuca and their firm Levidow, Levidow & Oberman were fined US$5,000 for submitting fake citations in a court filing.

“ChatGPT invented case law that didn’t exist,” stated David Siroky, General Manager of AI Data Analytics at Dell Technologies.

“These types of LLM tools are nothing more than a third grader doing a maths test. At that age [8-9 years old], when I was asked to predict the next number in a sequence, if I didn’t know the answer then I’d just make something up and guess.”

According to Siroky, such an approach is at the core of the concern.

“LLMs are built to try and please the user, whether that’s by predicting the next word in a sequence or the next sentence,” he outlined. “They’re not actually retrieving information from a database that is factually accurate, which means these models are prone to hallucinations. This must be factored into your risk models and mitigated.”
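One common mitigation is to ground the model in retrieved documents and instruct it to answer only from them – the retrieval-augmented pattern. The sketch below uses naive keyword retrieval and a hypothetical document store in place of a real vector search and LLM call:

```python
# Minimal sketch of retrieval-grounded prompting as one hallucination
# mitigation; the document store and prompt wording are illustrative, and
# a real system would use embeddings plus an actual LLM call.
DOCUMENTS = {
    "leave-policy": "Employees accrue 20 days of annual leave per year.",
    "expenses": "Claims over $500 require manager approval.",
}

def retrieve(question: str) -> str:
    """Naive keyword overlap standing in for a vector search."""
    words = set(question.lower().split())
    best = max(
        DOCUMENTS,
        key=lambda d: len(words & set(DOCUMENTS[d].lower().split())),
    )
    return DOCUMENTS[best]

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved context."""
    context = retrieve(question)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say 'I don't know'.\n\nContext: {context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How many days of annual leave do employees accrue?"))
```

Grounding does not eliminate hallucination, but it turns “predict something plausible” into “summarise something checkable”, which is exactly the risk-model distinction Siroky is drawing.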

As outlined by the World Economic Forum, the accelerated advancement of AI and subsequent emergence of LLMs at scale are combining to redefine the job landscape in the technology sector.

Super-charged by new tools and products – such as GitHub’s Copilot, Midjourney and ChatGPT – significant labour market shifts are expected amid considerable economic and geopolitical upheaval.

“Humans are great all-rounders,” Keay added. “In general, robotics and AI are good at augmenting what humans can do rather than actually replacing them.”

Take a hospital as an example. Robots could be used to deliver linen or even food but they’re not being utilised to take medical readings and of course, they don’t speak to the families of patients.

“There’s a whole range of tasks that go into the work of a hospital and some will never be replaced,” Keay confirmed. “As humans, we can instead focus on higher value work that is creative, strategic and emotional.”

Keay said the “huge potential” of robotics and AI – which is sometimes lost in the noise – lies in the ability to scale.

Once one robot or AI system is trained to solve a particular problem, every connected robot or AI system can share the solution. It then becomes a matter of deciding which of the big challenges should be solved through this approach.

“The challenge is with the people, rather than the technology,” Keay accepted. “We need to see more examples of people having the opportunity to get these tools in their hands – just try them out. I would dearly love to see a robot version of ChatGPT that people can comfortably use and we’re getting to that point.”

For example, drones enclosed in protective cages can be handled by an inexperienced operator and be up and running within 15 minutes, taking an individual away from a dangerous task such as a visual inspection.

Accelerating AI adoption

In response to increased market interest, Dell Technologies recently announced a joint initiative with Nvidia to help businesses build and use generative AI models in on-premises environments.

Known as Project Helix, the launch is designed to “quickly and securely” deliver improved customer service, market intelligence and enterprise search through AI.

The offering will deliver a series of full-stack solutions with technical expertise and pre-built tools based on Dell and Nvidia infrastructure and software. This includes a blueprint to help enterprises use proprietary data and more easily deploy generative AI “responsibly and accurately”.

“Project Helix is a response to what our customers have been asking for,” Siroky said.

According to Siroky, the offering is created around three core design areas:

  • Security: For organisations processing sensitive data – such as defence labs, hospitals or banks – often that data doesn’t exist outside of the physical building. Project Helix is designed to securely bring AI data processing capabilities to this environment.
  • Edge: In some scenarios, internet connectivity is challenging – whether that be at the Great Barrier Reef or in extreme heat on a racetrack. In this context, the ability to bring AI processing to the edge in a secure way is fundamental.
  • On-Premises: Machine learning training can be an expensive exercise in the cloud, with on-premises AI training capable of being delivered for 50-75% less than through hyperscalers.

“Executive leadership is needed when laying the foundations of AI,” Siroky added. “This type of leadership is crucial in removing some of the roadblocks and taking full advantage of AI in Australia.”
