Yes. Legal and professional accountability rests with the person who represents the content as their own. Courts increasingly hold professionals responsible for AI-generated mistakes, including fabricated citations and factual errors.
Learn more about this topic in our recent AI Accountability & AI Hallucinations FAQ blog here.
External Authority References
For a deeper understanding of AI accountability frameworks, these resources provide comprehensive guidance:
- Carnegie Council – AI Accountability – Ethical frameworks for AI responsibility
- Harvard DCE – Responsible AI Principles – Five key principles for organizational AI use
- NTIA – AI Accountability Policy – Federal guidance on AI trustworthiness
- Google Cloud – AI Hallucinations – Technical explanation of AI accuracy challenges
- Wikipedia – AI Hallucinations – Comprehensive overview of AI accuracy issues
- MIT Sloan – AI Hallucinations Guide – Academic perspective on AI limitations
- Salesforce – AI Accountability – Enterprise approaches to AI governance
Need help implementing responsible AI practices in your organization?
Contact Westech’s expert IT team at +27 11 519 4900 or visit our contact page to discuss your technology needs.
