Transparency & Fact-Checking
We are committed to accuracy, transparency, and accountability in all content presented on AI for Me. This page outlines our fact-checking process and sources.
All claims about AI capabilities, safety practices, and learning outcomes are grounded in peer-reviewed research and industry standards. We distinguish between verifiable facts, operational specifications, and aspirational programme goals.
Content has been fact-checked against academic publications, industry guidelines (IEEE, Partnership on AI, UNESCO), and professional standards including Australian Public Service requirements.
We make no exaggerated claims about AI capabilities, and our safety-first positioning is evidence-based. Individual outcomes may vary depending on engagement and application.
Fogg, B. J. (2019). Tiny Habits: The Small Changes That Change Everything. Houghton Mifflin Harcourt.
Weidinger, L., et al. (2021). Ethical and social risks of harm from Language Models. arXiv preprint.
Partnership on AI (2023). AI Governance: Practical Guidance for Organizations.
IEEE (2023). IEEE Standards for AI: Safety and Ethics Guidelines.
Australian Public Service (APS). Digital Governance Framework; Risk Management Standards (ISO 31000).
Verified: statements grounded in peer-reviewed research or established industry practice.
Operational: concrete programme details that define delivery and commitment.
Aspirational: programme outcomes that participants work toward achieving.
1. Systematically extract factual, operational, and aspirational statements.
2. Cross-reference each statement against peer-reviewed research, industry standards, and professional guidelines.
3. Classify each statement as verified, operational, or aspirational, or remove it if unsupported.
4. Document findings in a transparent report with source citations.
We welcome questions about our content claims and verification methodology. Contact us directly at:
privacy@wahub.ai