AI and the law: Navigating the new world


The rise of artificial intelligence (AI) has delivered a wide range of fantastic tools. It’s clear the technology is here to stay, and there’s no question that it offers incredible opportunities for change. However, as businesses increasingly integrate AI into their operations, they should be aware of – and actively address – the critical legal and ethical risks that come with it.

Unprecedented tech, unprecedented risk

From intelligent chatbots to generative video tools and even bot-generated disinformation, the power and potential of AI are already significant. And it’s easy to see how these developments can pose some serious legal challenges around AI and the law, in areas such as:

  • Information accuracy: How can you be sure that what an AI generates is factually correct? The truth is, in its present state, it often gets things wrong. This means if you rely on its outputs as fact, you may end up liable.
  • Confidentiality: When you input data into an AI system, where does this information go? And who else can access it? For sensitive business or client information, this could be considered a breach of confidentiality.
  • Ethical and legal compliance: Everything an AI outputs should adhere to applicable laws and ethical standards. But this is difficult to monitor, so you can’t guarantee that your use in a business setting is legally sound.
  • Bias and fairness: Algorithms aggregate existing content, and the result is exactly that: an average. So if there are inherent biases out in the world, they can unintentionally be reinforced and lead to discrimination.
  • Intellectual Property (IP): Ownership and intellectual property rights are an ongoing issue in the realm of IT. Copyright needs to be held by a human being – which right now makes ownership of AI-generated content a very murky area.
  • Liability and accountability: Similarly, if an AI system produces harmful, offensive or incorrect content, who’s responsible? And if a law is breached, who faces the consequences?

Copyright and ChatGPT

You might already be using OpenAI’s ChatGPT in your day-to-day. Under its terms and conditions, you retain the rights to your ‘input’, and the right, title and interest in the ‘output’ is assigned to you. However, the terms also state that outputs are not necessarily unique among users – and you can’t own an output that another user has generated. Long story short, if you’re generating materials through ChatGPT that you intend to copyright, it’s not a straightforward process.

Ethical use of AI

If your business is adopting, or thinking about adopting, AI, these are some key factors to consider to ensure it’s used lawfully, ethically and fairly.

  • Do you have clear guidelines on what information staff can input into AI systems?
  • Is there a risk that the data you input could be accessed by others?
  • Do your clients know you’re using AI, and could this compromise their trust in your services?
  • Do you have the time and resources to verify the accuracy and impartiality of AI outputs?
  • Should you include clauses in your terms and conditions to prohibit clients from entering your business information into AI tools?

Addressing these issues proactively can help you build a robust framework for ethical AI use.

Developing an acceptable use policy

Another recommended strategy to mitigate risk and build trust is to create an acceptable use policy for AI. This should include:

  • Guidelines on acceptable input and output use.
  • Clear training for staff to avoid inputting sensitive customer data into open-source AI tools.
  • A preference for closed-loop systems to ensure data remains secure.

The Australian government has proposed 10 mandatory guardrails for AI in high-risk settings, which provide useful guidance for businesses and a clear reference point for implementation:

  1. Establishing accountability processes for AI oversight.
  2. Implementing risk management practices.
  3. Protecting AI systems through robust data governance.
  4. Testing and monitoring AI models to ensure reliability.
  5. Enabling meaningful human oversight of AI decisions.
  6. Informing users about AI-enabled decisions and content.
  7. Creating processes for impacted individuals to challenge AI outcomes.
  8. Ensuring transparency across the AI supply chain.
  9. Maintaining detailed compliance records.
  10. Conducting conformity assessments to certify adherence to guardrails.

What it means for lawyers

Recently, the Law Society of New South Wales, the Legal Practice Board of Western Australia, and the Victorian Legal Services Board and Commissioner jointly released a Statement on the Use of Artificial Intelligence in Australian Legal Practice that provides guidance for legal professionals looking to leverage AI.

Echoing our above sentiment, it emphasises that the onus is on the individual to uphold ethical standards and obligations – and Australian law practitioners should be mindful that they “cannot safely enter” sensitive information into public AI engines like ChatGPT.

Read our article for more on this new development.

Need a hand?

There’s no question that AI can be useful, but it must be used responsibly. Quest Legal can help you develop a strong AI strategy that prioritises ethics and compliance with the law. Whether it’s crafting a customised AI use policy or addressing IP concerns, we’re here to help your business navigate the future of AI and law with confidence.

Get in touch with us today.
