The field of Artificial Intelligence (AI) has advanced rapidly in the past decade, driven by the growth in computer processing power and the proliferation of data.
This progress in AI is also transforming the information security domain, with both beneficial and harmful consequences. System designers and implementers can strengthen their systems against adversaries by building stronger intrusion detection and mitigation strategies, and by learning and extracting malicious patterns that would be impossible for a human expert to observe. Additionally, AI can automate tasks that would otherwise be very costly and that humans are prone to perform erroneously.
At the same time, however, attackers can use the same set of artificial intelligence techniques to make their attacks faster, better structured, and more powerful. These challenges grow especially pressing in settings such as Internet of Things (IoT) applications, with their proliferation of sensors, or trusted autonomy. We require our systems to be efficient and resilient to attacks, and we take those requirements for granted. Yet designing such systems is very challenging, and even the tiniest mistake can result in significant financial damage, safety/security breaches, and a corresponding erosion of public trust in such systems.
To prevent such outcomes, a systematic evaluation of the various aspects of artificial intelligence and security is necessary.
The goal of the workshop is to bring together researchers working in areas of artificial intelligence and security to foster intensive interaction between those communities for next-generation research on AI and security.
We plan to divide the workshop into two main categories: 1) AI as an offensive tool in security, and 2) AI as a constructive (defensive) tool in security. For both categories, there are several research questions we consider to be of utmost importance:
1. What are the different domains of security where AI can be used, and what are the intersections of those domains?
2. How can we use more advanced concepts from AI in security applications?
3. How can security processes be automated using AI?
4. How can we improve collaboration between AI researchers and security researchers?
5. How can we improve the relevance and reproducibility of the research?
While we expect that the attendees will also help shape the final selection of topics we cover, some topics we consider are:
1. Implementation attacks (side-channel analysis, fault injection attacks, microarchitectural attacks, etc.)
2. Attacks on Physically Unclonable Functions (PUFs).
3. Hardware Trojans.
4. Network/host intrusion detection.
5. The role of adversarial learning in security and privacy.
The workshop will feature invited talks, plenary discussions, contributed talks, and discussions in small groups.
We will have a plenary discussion about industrial needs and topics. There, we aim to understand the differences between academic and real-world scenarios and how to add more realistic objectives to current experimental settings. We plan to have several participants from industry who will share their views. Finally, there will also be time for networking, as several social events will be organized throughout the week.