AI Transparency Statement
Introduction
The Museum of Australian Democracy at Old Parliament House (MoAD) is committed to continuously improving our Artificial Intelligence (AI) capabilities. We will remain transparent as we adapt to changes in AI technology, the environment, and government policy requirements.
In this statement, we describe:
- How we use AI
- Why we use AI
- How we ensure quality and safety of data
Explanations for specific terms are included in the Definitions section of this document.
How we use AI
- MoAD uses AI in some of its corporate and enabling functions. This includes some level of AI assistance in contact and data matching (see the Definitions section below)
- Generative AI is occasionally used in the following areas:
  - planning stages of creative work
  - the production of subtitles and narration for video and audio products. NB: content is always thoroughly proofread and edited by MoAD staff before publication
  - generation of short-form content for internal use (e.g. content or meeting summaries)
  - data analytics and reporting
  - cyber security monitoring and response activities
- Generative AI is not used to alter historical digital assets/records
Why we use AI
AI is not widely used across MoAD systems; however, MoAD recognises that AI is an increasingly prevalent and critical component of many digital systems.
MoAD uses AI for its benefits in creating business efficiencies and reducing human error.
Where AI components have been incorporated into proprietary software or IT products used by MoAD, we will consider using these components if:
- there is a demonstrated benefit to MoAD's audience or bodies of work; and
- the data governance and compliance requirements are met.
How we ensure quality and safety of data
MoAD manages the quality and safety of data through the following controls:
- all use of AI at MoAD is subject to strict change and risk management processes that monitor the impacts of changes and new features;
- risks are reviewed periodically, with set processes in place should a risk rise above our agency's risk tolerance;
- MoAD has additional backup and audit arrangements in place to ensure the integrity and confidentiality of our data;
- staff undertake mandatory training upon induction, with annual refresher training, to ensure they comply with the quality and safety requirements of MoAD's AI policy; and
- MoAD's AI policy aligns with the Australian Government's policy on the responsible use of AI in government.
Government policy alignment
The content on this page aligns with the Digital Transformation Agency's policy on the responsible use of AI in government, which outlines how the Australian Government plans to leverage AI opportunities while ensuring its safe and ethical use.
Statement review process
This statement will be reviewed annually, or earlier if MoAD makes a significant change to its approach to AI or if a new factor affects this statement.
Contact
If you have any questions about MoAD's use of AI, please get in touch using our contact form.
Definitions
Artificial intelligence – Artificial intelligence (AI) is the ability of computer systems to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
Contact and data matching – refers to the process of identifying and linking records that represent the same entity across different data sources. This ensures that information is accurate, consistent, and free from duplicates. An illustrative sketch of this process follows these definitions.
Corporate and enabling – the use of AI to enhance corporate and enabling functions by automating processes, optimising resource allocation, and boosting operational efficiency. For compliance and fraud detection, it aids in recognising patterns within records to ensure adherence to laws and regulations.
Generative AI – a type of AI that can create new content, such as text, images, videos, music, and code.
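To illustrate the contact and data matching definition above, the following is a minimal, hypothetical sketch of the general technique: records from different sources are compared on normalised identifying fields, and records that appear to represent the same entity are linked so duplicates can be consolidated. The field names, sources, and matching rules shown here are assumptions for illustration only and do not describe MoAD's actual systems or tools.

```python
# Minimal, hypothetical sketch of contact and data matching.
# Records sharing a normalised email (or a normalised name plus postcode)
# are treated as the same entity and grouped under one key.
# Illustrative only; not a description of MoAD's actual implementation.

def normalise(value: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't block a match."""
    return " ".join(value.lower().split())


def match_key(record: dict) -> str:
    """Build a comparison key from the fields most likely to identify an entity."""
    if record.get("email"):
        return "email:" + normalise(record["email"])
    return "name+postcode:" + normalise(record["name"]) + "|" + normalise(record.get("postcode", ""))


def link_records(*sources: list[dict]) -> dict[str, list[dict]]:
    """Group records from all sources that appear to represent the same entity."""
    linked: dict[str, list[dict]] = {}
    for source in sources:
        for record in source:
            linked.setdefault(match_key(record), []).append(record)
    return linked


if __name__ == "__main__":
    crm = [{"name": "Jane Citizen", "email": "jane@example.com", "postcode": "2600"}]
    ticketing = [{"name": "JANE CITIZEN ", "email": "Jane@Example.com", "postcode": "2600"}]
    for key, records in link_records(crm, ticketing).items():
        print(key, "->", len(records), "linked record(s)")
```

In this sketch the two example records are linked under a single key because their normalised email addresses match, which is the kind of duplicate consolidation the definition refers to.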
Last updated: 28/02/2025