Approx read time: 4 mins 🕒
Could this world-first legal framework shake up how AI in media is regulated?
Only time will tell, but if you’re using AI in your media content, the main takeaway is that AI-generated elements, such as AI voiceovers, must be clearly labelled as AI-generated.
Perhaps you have come across the EU AI Act over the last few months and you’re curious to know more. Well, you’re in the right place.
From our position as a media accessibility and localisation company since 2014, we want readers, clients and potential clients to be as in the know as possible about the EU AI Act.
So, in this blog, we will explore:
- What the EU AI Act is, in more detail
- When it takes effect
- How it impacts businesses
Let’s dive in!
What is the EU AI Act?
This EU regulation, the world’s first comprehensive law on AI, sets strict compliance standards and categorises AI systems into risk levels.
When it comes to media accessibility and our services specifically, AI voiceovers and AI‑generated dubbing are typically classified as limited risk AI systems. This means they are allowed, but are subject to transparency and disclosure obligations, rather than outright restrictions.
For public facing content such as advertisements, marketing videos and films, the EU AI Act requires that AI‑generated or AI‑manipulated audio, images and video content is clearly disclosed where it could otherwise mislead audiences. This is part of transparency obligations.
If it’s not obvious from the context that the voice is synthetic and a reasonable person could think it is a real human voice, then disclosure is needed.
In practice, that can look like on-screen text labelling an AI voiceover (synthetic audio) as AI-generated, and the same applies to an AI dub.
For audio-only formats, such as a podcast or audiobook, an audible disclosure must be used.
AI voice cloning almost always requires disclosure.
By following these disclosure practices, businesses using these forms of media can remain compliant with the EU AI Act.
Note that if AI assists, restores or conveys content without altering identity or meaning, it generally does not need to be labelled.
When does the EU AI Act start?
The EU AI Act will be fully enforceable from 2 August 2026. It came into force on 1 August 2024, but most obligations are phased in over time. After August 2026, enforcement powers and fines can be applied.
Do UK businesses need to comply with the EU AI Act?
While EU regulations don’t automatically apply in the UK anymore, UK businesses will need to comply if they operate in the EU.
For example, if a Swansea-based company created an advert using AI that was shown, heard, or targeted to users in the EU, it would need to comply with the EU AI Act. Likewise, if the campaign is run via an EU-facing platform or distributed by an EU-based company, it would again need to comply.
What are the implications of the EU AI Act for businesses?
We spoke to Sophie Muller, Head of VoiceBox, to get her take on how it impacts businesses and organisations.
“From my perspective, it is mainly going to impact trust and be a negative trust signal,” said Sophie.
“AI voiceovers are getting better. While they may lack emotion and variety in some cases, they can also be pretty realistic and hard to distinguish from a human voice at times.
“But even if it sounds like a human voice, if you use an AI voice for your advert, for example, and it’s not obvious, you will need to label it.
“The mere association of being AI-generated will, in my opinion, cause some audiences to have less trust in the product and not feel as aligned or close with it, therefore impacting buying.
“Of course, that could change over time if AI voiceovers become the norm, but for now, I believe that to be the case.
“We will always be transparent about AI usage in our workflows. Usually, we will give the client their options and provide our usual honest advice on which option we recommend in that situation.
“It’s worth saying that at VoiceBox, we fully support the EU AI Act and how this will impact AI-generated voices. Transparency with audiences and being a responsible provider are all things that we are aligned with and we encourage the full enforcement of the Act from August 2026.”
Conclusion
Ultimately, the EU AI Act doesn’t and won’t stop businesses from using AI in media or accessibility, but it does make them responsible for how it’s used, how transparent it is, and whether it respects people’s rights.
We look forward to seeing how the EU AI Act takes shape to encourage responsible use.
To discuss the EU AI Act in more detail and what it could mean for services you’re interested in, contact us today.
FAQs
How does the EU AI Act relate to the European Accessibility Act (EAA)?
The EU AI Act does not replace the European Accessibility Act (EAA) or audiovisual media accessibility rules; instead, it adds an AI-specific compliance layer. The EAA (in force from 28 June 2025) mandates captions, subtitles, audio description and accessible media services, whereas the AI Act governs how the AI used to deliver those services is designed, trained, monitored and corrected.
Does all AI-generated media content need to be labelled?
No. The EU AI Act does not require all AI‑generated media content to be labelled. Disclosure is required only where AI‑generated or AI‑manipulated audio, images or video could reasonably mislead audiences.
For example, AI voiceovers, AI dubbing or voice cloning used in public‑facing content such as adverts, marketing videos or certain films must be disclosed if it is not obvious from the context that the voice is synthetic. By contrast, AI used purely for assistive or technical purposes — such as captions, subtitles or accessibility features that do not alter identity or meaning — generally does not need labelling.
How should businesses prepare for the EU AI Act?
Businesses should start by auditing where AI is used across their media and accessibility workflows, including voiceovers, dubbing, translation and localisation. The key questions are: Is AI generating or manipulating audio or video that could look or sound human? Is the AI use obvious from context, or could it mislead audiences? Where disclosure would be required, is there a clear and consistent way to label content?
Although enforcement begins on 2 August 2026, preparing now allows organisations to adapt processes, update supplier contracts and make informed decisions about when AI is appropriate — and when human involvement is the safer option.
