
After a wave of viral social media posts claimed that ChatGPT had stopped providing legal, medical, and financial guidance, OpenAI has officially clarified the scope of what its AI model can and cannot do.

The clarification, which followed widespread confusion online, reaffirms that ChatGPT remains an educational tool, capable of explaining complex topics and sharing general information, but not a licensed professional adviser. The move reflects a growing emphasis on AI accountability, ethical compliance, and user protection as regulators worldwide grapple with how to govern generative AI in sensitive domains.

Background: Viral Claims and Media Reports

The controversy began when international outlet Nexta posted on X (formerly Twitter) that, effective October 29, 2025, ChatGPT had “stopped providing specific guidance on treatment, legal issues, and money,” labeling the system as strictly educational.

This prompted speculation that OpenAI had changed its terms of use to block legal, medical, or investment responses altogether. Subsequent media reviews, however, confirmed that no such new restriction had been implemented.

OpenAI’s Usage Policies, last updated October 29, explicitly prohibit:

“The provision of tailored advice that requires a licence, such as legal or medical advice, without appropriate involvement by a licensed professional.”

OpenAI Responds: “Not a New Change”

In a direct statement, Karan Singhal, OpenAI’s Head of Health AI, wrote on X:

“Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

This clarification reiterates OpenAI’s long-standing policy: while ChatGPT may explain concepts and summarize regulations or procedures, it will not offer personalised legal, medical, or financial guidance that could amount to professional advice or liability exposure.

Why the Clarification Matters

Although ChatGPT’s policies have not changed, the episode highlights a crucial issue: how AI models operate in regulated fields. Legal experts warn that without strict boundaries, AI tools could inadvertently cross into areas that require licensure, fiduciary duty, or regulatory oversight.

The distinction between “information” and “advice” remains central to compliance. For instance:

  • Explaining what UAE labour law says about notice periods is permitted.
  • Telling a user how to act in a specific legal dispute is not.

By publicly reinforcing this boundary, OpenAI aims to mitigate the risk of misuse, misinformation, and potential lawsuits, while maintaining user trust in its products.

Broader Legal and Regulatory Implications

The clarification comes at a time when AI developers face intensifying global scrutiny. OpenAI and other leading firms are already defending lawsuits from authors, publishers, and media organisations alleging copyright infringement during model training.

As legal exposure increases, companies are proactively tightening policy language to avoid liability, especially in high-stakes sectors such as healthcare, finance, and law.

Industry analysts note that this is part of a broader movement toward regulated AI ecosystems, where ethical frameworks, audit trails, and human oversight are integrated into product design.

The UAE Perspective: Responsible AI Development

The UAE continues to position itself as a global leader in responsible AI adoption, balancing innovation with ethical regulation. Authorities such as the UAE Artificial Intelligence Office and the Dubai Digital Authority have emphasized that while AI can enhance access to information, it must operate within clearly defined legal parameters to protect consumers and uphold professional integrity.

As Al Kabban & Associates regularly advises clients across technology and data governance sectors, the firm underscores that both AI developers and users share a responsibility to ensure that automation does not replace regulated human expertise, particularly in law, medicine, or financial planning.

Al Kabban & Associates Commentary

OpenAI’s clarification serves as a timely reminder that AI literacy and compliance are as essential as technological advancement.

At Al Kabban & Associates, our Technology and AI Law Division advises businesses, startups, and digital platforms on:

  • Regulatory compliance under UAE and international data laws;
  • Ethical use of AI and automation tools;
  • Legal risk management and intellectual property protection.

For more information or to schedule a consultation, contact us at +971 4 453 9090 or visit www.alkabban.com.

You can also follow us on social media for more updates on everything law related in the UAE: @Alkabban_Law

ALSO READ:

The Hidden Costs of AI-Drafted Contracts in the UAE: Why Skipping a Lawyer Can Cost You More

Denmark’s Deepfake Law vs UAE: Why Your Face Should Be Your Copyright

DIAC Partners with Jus Mundi to Integrate AI in Arbitration Services
