Information on the processing of personal data

Before you give your consent in the sign-up form, we are obligated to inform you about how we process personal data under the General Data Protection Regulation (Databeskyttelsesforordningen).

We are also obligated to inform you about the details of our processing of your data and about your rights in relation to this processing.


Legal basis

Our processing of personal data is based on Article 6(1)(a) of the General Data Protection Regulation (consent), cf. the Danish Data Protection Act (Databeskyttelsesloven).

Your data will be stored securely by Aarhus Kommune for as long as you wish to receive news from Tech City Aarhus. If you choose to unsubscribe, your data will be deleted permanently. The information will not be shared with others, and the data will be processed exclusively by Aarhus Kommune Erhverv.


Your rights

You have the right to request access to the information we hold about you.

You have the right to request rectification or deletion of the information.


Who uses your information?

The data controller is part of Aarhus Kommune, and your personal data is processed only by:


Data controller:

Borgmesterens Afdeling
Erhverv og Bæredygtig Udvikling
Aarhus Kommune Erhverv

Rådhuspladsen 2
8000 Aarhus C

E-mail: aarhuskommuneerhverv@aarhus.dk
Telephone: +45 89 40 22 00


If you have any questions about Aarhus Kommune's processing of your data, please contact Aarhus Kommune's data protection adviser at: databeskyttelsesraadgiver@aarhus.dk

Finally, please note that you can lodge a complaint about our processing of personal data with Datatilsynet (the Danish Data Protection Agency) at: www.datatilsynet.dk

Top Security Issues: Alexandra Instituttet

Details

Date:

Time: 9:30-11:00

Webinar

Tec Five-Tec-O-Meter

Michael Rømer Bennike, Zaruhi Aslanyan & Rasmus Larsen

Top security issues with Generative AI

LLM-based solutions such as chatbots and ChatGPT have recently become very popular, and organisations are scrambling to incorporate AI to improve their productivity and competitive edge. But LLMs also introduce new attack vectors that must be mitigated to avoid introducing vulnerabilities.


Do you develop Large Language Models (LLMs), plugins, or software that is based on an LLM?


If so, you should take steps to mitigate the most common vulnerabilities that can compromise the LLM, its users, or the company hosting the application.


This webinar is based on the new OWASP Top 10 for LLM vulnerabilities. Security and AI experts from the Alexandra Institute will outline what you as a developer should be aware of, and how you can protect yourself and your organisation from the most common LLM vulnerabilities.
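To give a flavour of the kind of mitigations the OWASP list describes, the sketch below shows two of them in a few lines of Python: escaping model output before rendering it, and allow-listing tool calls. The call_llm() stub, the ALLOWED_TOOLS set and the TOOL: convention are illustrative assumptions, not material from the webinar.

```python
# Minimal, illustrative sketch of two recurring mitigations from the OWASP
# Top 10 for LLM Applications: treat model output as untrusted before
# rendering it, and only execute tool calls on an explicit allow-list.
# call_llm() is a stand-in for whatever model API you actually use.
import html
from typing import Optional

ALLOWED_TOOLS = {"search_docs", "get_weather"}  # hypothetical allow-listed tools


def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned reply that contains
    # both HTML markup and an unapproved tool request.
    return "<b>example model reply</b> TOOL:delete_database"


def render_reply(reply: str) -> str:
    # Never insert raw model output into a web page: escape it first so that
    # any injected markup or script tags are shown as plain text.
    return html.escape(reply)


def extract_tool_call(reply: str) -> Optional[str]:
    # Only honour tool requests on the allow-list; anything else is ignored
    # instead of being executed.
    for token in reply.split():
        if token.startswith("TOOL:"):
            tool = token.split(":", 1)[1]
            return tool if tool in ALLOWED_TOOLS else None
    return None


if __name__ == "__main__":
    reply = call_llm("Summarise today's tickets")
    print(render_reply(reply))       # safe to embed in HTML
    print(extract_tool_call(reply))  # None: delete_database is not allowed
```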


The webinar is aimed at:

Software developers

Managers of products based on LLM technologies, such as chatbots, voice recognition software, text-to-speech software, image generation, etc.

Tech-savvy IT professionals