“Digital meetings have become the norm in the legal profession. But how aware are we really of where the conversations are stored, who has access to them, and what happens to the data afterward?” ask Silvija Seres and Beate Skjerven Nygardshaug.
In just a few years, video meetings have gone from being a backup tool to the main stage for communication. Client conversations, negotiations, internal assessments – most meetings now happen on screen. It has made daily work easier. But it has also moved confidentiality out of the controlled meeting room and into a digital landscape, where the boundaries of responsibility, ownership, and compliance are far less clear.
When the conversation takes on a life of its own
A meeting is no longer just what’s said in the room. Today’s meeting platforms generate layer upon layer of data: participant lists, network information, screen sharing, audio, video, chat, transcripts, and AI-generated summaries. Everything is stored, usually automatically. And everything is processed, often without the user knowing how, where, or by whom.
In practice, this means that a client’s confidential information could be stored in data centers far outside the EU. Conversations could be analyzed by models that no one has evaluated. And meeting summaries could end up in the wrong hands – not necessarily out of ill intent, but because the platform is configured that way. These digital solutions are designed for efficiency, not legal accountability.
The human error that exposed a system failure
In 2025, the world learned that a U.S. officer had mistakenly added the wrong person to an encrypted messaging group on Signal. The unauthorized participant had access to sensitive military communication for several weeks before the error was discovered. There was no technical vulnerability. No external attack. Just a simple human mistake – amplified by a system that lacked sufficient safeguards.
The same thing can happen here. A link is forwarded. Someone logs in with the wrong account. The camera is off. The name looks familiar. No one reacts. And the meeting continues.
When we leave confidentiality to the platform’s default settings, we also shift the responsibility away from ourselves. It’s rarely a conscious decision – but it has consequences all the same.
Legal accountability in a technological gray zone
Confidentiality and compliance require awareness of and control over data. But many platforms in use today are built on an architecture that prioritizes storage and processing over user control. This makes it difficult to know who has access to meeting data, how the information might be reused, or whether the conversation is being used to train a foreign AI model.
GDPR gave us a framework. Schrems II reminded us just how fragile the boundaries between jurisdictions really are. The upcoming EU AI Act demands explainability and risk classification. Yet many organizations, including in the legal sector, continue to use platforms where meeting content and metadata are processed outside of Norwegian or European control – and thus outside any clear legal accountability.
If we don’t know where our data is located, we don’t know who is responsible if something goes wrong.
A meeting must stand up to audit
There is technology that provides better control. Solutions where recordings and transcripts only happen when explicitly requested. Where meeting data is not sent out of the country. Where authentication is done via SSO, BankID, or smart card. Where every participant is verified, and you as the host know exactly who is in the meeting – and who isn’t.
These systems exist, and they work. The problem is that they are rarely chosen – not because they don't work, but because they are less well known, and perhaps not quite as "seamless." In reality, that seamlessness is the issue. When a meeting platform hides the complexity under the hood, it is easy to assume everything is in order. But it is precisely in the absence of friction that risk grows.
If we want to maintain trust in our professional role, we must also be able to document that our conversations are handled properly – technically, ethically, and legally.
We’re not mature enough – yet
This isn’t about being anti-technology. It’s about recognizing that the pace of technological advancement has outstripped the profession’s maturity. We saw it with social media, where the public discourse changed before we had a chance to regulate influence. We saw it with encrypted messaging platforms that challenged investigations and evidence gathering. Now we’re seeing the same with AI and video – colliding with confidentiality, responsibility, and trust.
We’re using technology that changes how we interact, but we’re not adapting quickly enough in how we assess risk, responsibility, and compliance.
The attorney's role presupposes control over information. That still applies, even when the conversation happens over video. The digital meeting room is here to stay. It must now stand up to the same scrutiny we expect on behalf of our clients.
AI-assisted translation from the original article in Norwegian, published on advokatbladet.no, reviewed for clarity and context by a human.