Latest news

Maintenance on OneBase and Management Portal

On Thursday, December 14, 2023, maintenance will be carried out on OneBase and the Management Portal between 7:00 AM and 8:30 AM. During the maintenance it will temporarily not be possible to view current consumption overviews. High-usage and fraud detection remain active, and call data will still be processed.

Read more »

GRIP reduced availability

Engineers report problems with the use of GRIP and/or applications available through GRIP. An exact impact assessment is still being carried out. Engineers have been dispatched for investigation and recovery. The next update will follow around 1:00 PM, or sooner if more information becomes available.
Read more »

Secure digital business

As an entrepreneur or security manager, would you like to receive notifications of serious cyber threats to companies in your mailbox? Then join the DTC Community.
To support entrepreneurs, there is also a wide range of cybersecurity information and a toolbox with cyber tools. Want to test whether you already have the basics in order? Take the CyberSafe Check for self-employed persons and SMEs.

Read more »

AI in the workplace: DTC and police share tips

In collaboration with the police, the Digital Trust Center (DTC) warns about the risks of using generative artificial intelligence (AI) in the workplace. They also share tips on how both employees and employers can safely use AI. Human contact is the key here.

While AI tools such as text generators and image creation apps offer entrepreneurs significant (efficiency) benefits, these technologies also have a dark side: cybercriminals can use the same tools for fraudulent practices.

Identity fraud, such as CEO fraud. For example, AI can be used to clone a voice or to create realistic texts.

Spreading disinformation. The language model ChatGPT produces authentic-looking texts at scale and with great speed. Such a language model can help criminals with propaganda and disinformation.

Malware. ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code (such as malware).

Manon den Dunnen, Strategic Digital Specialist at the police, emphasizes the importance of being vigilant when using AI yourself: “If you wouldn't put it on LinkedIn, you shouldn't put it on ChatGPT either. Because that system trains itself with the information you enter and before you know it, your information appears in texts generated for others. That's why companies like Samsung have banned their employees from using it.”

Tips for dealing with artificial intelligence and cybercriminals who use it:

It is best to have confidential conversations in person.
Never enter confidential data into ChatGPT or similar language models, and that includes people's names.
Be aware that these systems are designed to generate texts that merely 'resemble' real ones. A language model is not a search engine and there is no database behind it, so do not use it when factual accuracy matters.
If you have any doubts about the identity of the person on the phone, suggest calling them back. Another option is to ask an experience question, for example: "How was your conversation yesterday?"
Make agreements, for example, to only handle invoices when there is an opportunity to verify the source.
Investigate, in coordination with partners in the chain, which solutions you can implement to determine the authenticity of the sender of invoices or other important communications.
Refer back to advice on, for example, phishing or CEO fraud. These forms of cyber incident remain fundamentally the same, even when AI is used as a tool.
Know what questions to ask when purchasing software, for example: How does this software use artificial intelligence, how is it trained, what happens to this data, and what security issues are involved?

Read more »

mShield interruption

Engineers have noticed an increased call volume on the mShield service and are investigating. A network component is currently being restarted. We will keep you informed.
Read more »