News: Sept 11th, 2016 - Participants section added. Looking forward to the next edition of the School!


Program (about 28 hours)

9:00-10:30 - First morning session
10:30-11:00 - Coffee break
11:00-12:30 - Second morning session
12:30-13:30 - Lunch
13:30-15:00 - First afternoon session
15:00-15:30 - Coffee break
15:30-17:00 - Second afternoon session
  • Monday, September 5th
    • Morning: Machine Learning under Attack: Vulnerability Exploitation and Security Measures (Battista Biggio, Giorgio Giacinto)
    • Afternoon: Dynamic analysis of Malicious Android Apps (Lorenzo Cavallaro)
  • Tuesday, September 6th
    • Morning: De-anonymization and Machine Learning's Role in Privacy and Security (Aylin Caliskan-Islam)
    • Afternoon: Data Mining for Vulnerability Discovery (Fabian Yamaguchi)
  • Wednesday, September 7th
    • Morning: The age of human hacking (Enrico Frumento)
    • Afternoon (first session): The age of human hacking (Enrico Frumento)
    • Evening: Guided Tour in Cagliari, Social Dinner
  • Thursday, September 8th
    • Morning: Empirical Validation of Risk and Security Requirements Methodologies (Fabio Massacci)
    • Afternoon: Who is bad? The hard task of developing security metrics on providers and what those metrics can teach us about the interplay of crime, markets and security (Michel Van Eeten)
  • Friday, September 9th
    • Morning: Web application security: from static analysis to dynamic protections and recovery (Miguel P. Correia)
    • Afternoon (first session): The OWASP testing guide v4 (Matteo Meucci)
    • Afternoon (second session): Threat Modeling with PASTA (Marco Morana)

« Main Program

De-anonymization and Machine Learning's Role in Privacy and Security (3 hours)

This lecture will present threats, open problems, and challenges through the lens of linguistic privacy. Humans learn language and its semantics on an individual basis and consequently develop unique styles. Such aspects of human behavior expressed in language can be characterized and quantified through language processing and machine learning. Linguistic features exhibited in natural or programming languages have the power to de-anonymize authors, programmers, or cyber criminals. Additionally, textual features observed in social networks shed light on user privacy behavior. Being able to infer a person's identity or behavior through language processing poses a great threat to individuals' privacy and anonymity. Moreover, natural language necessarily contains human biases that can be empirically demonstrated, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well. Consequently, applications that incorporate human data may lead to unfairness and disparate impact. Nevertheless, understanding these threats and challenges can make it possible to mitigate prejudice or reverse machine learning attacks as a countermeasure. This emerging area of linguistic privacy, along with its open problems, raises societal, ethical, and policy challenges.
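To illustrate how linguistic features can de-anonymize authors, here is a minimal stylometry sketch (the two "authors" and their texts are invented for illustration): each known author gets a character n-gram profile, and an anonymous sample is attributed to the author whose profile is closest by cosine similarity.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    # frequency profile of overlapping character n-grams
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(sample, profiles):
    # pick the known author whose n-gram profile is most similar
    return max(profiles, key=lambda author: cosine(char_ngrams(sample), profiles[author]))

# toy corpora (invented): two authors with very different habits
alice = "I reckon, therefore, that we ought to proceed carefully, ought we not?"
bob = "lol yeah nah gonna just ship it, who cares tbh, ship ship ship"
profiles = {"alice": char_ngrams(alice), "bob": char_ngrams(bob)}

print(attribute("we ought to proceed, ought we not?", profiles))  # -> alice
```

Real stylometric attacks use far richer feature sets (function words, syntax, layout) and proper classifiers, but the de-anonymization principle is the same.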

Lecturer: Aylin Caliskan-Islam

« Main Program

Who is bad? The hard task of developing security metrics on providers and what those metrics can teach us about the interplay of crime, markets and security (3 hours)

A lot of security research studies the prevalence of compromise and crime across systems and services. Virtually all of it focuses on technical identifiers, rather than the actual actors administering these systems. Without understanding the complicated relationship between identifiers and actors, one cannot meaningfully measure security performance, identify the underlying economic incentives or study what interventions actually improve security. This talk presents work on mapping identifiers to actors, designing reputation metrics for security and thinking more empirically about economic incentives and interventions. It covers work on botnet mitigation by ISPs, fighting malicious hosting with law enforcement, and voluntary takedown of compromised sites by network operators.
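The gap between identifiers and actors can be illustrated with a toy reputation metric (all provider names and numbers are invented): counting raw abuse reports makes the largest hoster look worst, while normalizing by the size of the actor's operation can reverse the picture.

```python
# Invented example data: abuse reports and hosted-domain counts per provider.
providers = {
    "hoster_a": {"abuse_reports": 900, "hosted_domains": 1_000_000},
    "hoster_b": {"abuse_reports": 300, "hosted_domains": 50_000},
}

def abuse_rate(p):
    # normalize compromise counts by the size of the provider
    return p["abuse_reports"] / p["hosted_domains"]

worst_raw = max(providers, key=lambda n: providers[n]["abuse_reports"])
worst_rate = max(providers, key=lambda n: abuse_rate(providers[n]))

print(worst_raw)   # hoster_a: most reports in absolute terms
print(worst_rate)  # hoster_b: far higher rate once size is accounted for
```

The hard part in practice, as the talk argues, is not the division but attributing identifiers (IPs, domains) to the actors actually administering them.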

Lecturer: Michel Van Eeten

« Main Program

Data Mining for Vulnerability Discovery (3 hours)

The discovery of vulnerabilities in source code is key to securing computer systems. While specific types of security flaws can be identified automatically, in the general case the process of finding vulnerabilities cannot be automated, and vulnerabilities are mainly discovered by manual analysis. In this lecture, we present recent approaches that aim at assisting a security analyst during the auditing of source code, instead of replacing her. These approaches combine different concepts of program analysis and data mining, which allows for spotting vulnerable code more effectively and efficiently. In particular, we cover the extrapolation of vulnerable code, the identification of missing checks, and mining for vulnerabilities using graph databases. Additionally, we provide some hands-on examples using the tool Joern.
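The "missing checks" idea can be sketched in a few lines (a deliberately crude toy, not the lecture's actual tooling): look at how call sites of a sensitive function are usually guarded, and flag the call sites that deviate from the majority.

```python
import re

# Toy "code base" (invented): call sites of memcpy, some guarded by a length check.
snippets = [
    "if (len < sizeof(buf)) memcpy(buf, src, len);",
    "if (len < sizeof(buf)) memcpy(buf, src, len);",
    "if (n < sizeof(out)) memcpy(out, data, n);",
    "memcpy(buf, src, len);",  # unguarded -> the anomaly we want to find
]

def is_guarded(snippet):
    # crude proxy: does any if-condition guard the call?
    return bool(re.search(r"if \([^)]*\)", snippet))

def flag_missing_checks(snippets, threshold=0.5):
    # if the majority of call sites are guarded, flag the unguarded ones
    guarded = [is_guarded(s) for s in snippets]
    majority_guarded = sum(guarded) / len(guarded) >= threshold
    return [s for s, g in zip(snippets, guarded) if majority_guarded and not g]

print(flag_missing_checks(snippets))  # -> ['memcpy(buf, src, len);']
```

Real systems such as Joern work on parsed program representations (code property graphs) rather than raw strings, but the anomaly-detection intuition is the same.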

Lecturer: Fabian Yamaguchi

« Main Program

Dynamic analysis of Malicious Android Apps (3 hours)

Lecture contents will be defined soon.

Lecturer: Lorenzo Cavallaro

« Main Program

Web application security: from static analysis to dynamic protections and recovery (3 hours)

Web application security continues to be in bad shape, as shown by our own recent discovery of dozens of zero-days in a number of open source applications. In this lecture we will cover a set of complementary approaches for web application security. First, we will see static analysis tools and how they can benefit from machine learning. Second, we will cover how protection mechanisms can be embedded in the execution environment, specifically in the DBMS. Finally, we will see how it is possible to recover from intrusions that compromise the state of large-scale web applications running in the cloud.
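As a flavor of what a static analysis rule looks like, here is a minimal (and intentionally naive) check for one classic web flaw: SQL statements built by string concatenation instead of parameterized queries. The rule and example lines are invented for illustration.

```python
import re

def looks_injectable(line):
    # flag lines that both contain a SQL keyword and concatenate a
    # quoted string fragment with '+' (a common SQL-injection pattern)
    has_sql = re.search(r"\b(select|insert|update|delete)\b", line, re.I)
    concatenated = '" +' in line or "' +" in line
    return bool(has_sql and concatenated)

lines = [
    'cur.execute("SELECT * FROM users WHERE name = \'" + name + "\'")',  # flagged
    'cur.execute("SELECT * FROM users WHERE name = %s", (name,))',       # parameterized: OK
]
print([l for l in lines if looks_injectable(l)])
```

Production static analyzers track tainted data through parsing and data-flow analysis rather than matching text, which is exactly where the machine learning assistance discussed in the lecture comes in.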

Lecturer: Miguel P. Correia

« Main Program

Empirical Validation of Risk and Security Requirements Methodologies (3 hours)

There are many risk assessment methodologies and many security requirements methods, both from industry (CoBIT, ISO2700x, ISECOM's OSSTMM) and from academia (CORAS, SecureTropos, SI*, SREP, etc.). Perhaps you have one of your own. And of course you want to know whether your own pet methodology works in practice.
To answer this question, we first need to ask what "in practice" means. The usual interpretation among researchers is that the researchers themselves tackle a real-world problem. But this is just the first mile of a long road. An anecdote illustrates the point: V.M., a former air traffic controller with more than 30 years of experience in evaluating controller software, was evaluating our tool for safety patterns. He told us our software had generated a Windows error message (the kind that reads 'error at 0xAB91A'). It was not an error: it was a window showing that a logical formula did not hold for his proposed safety pattern!
A methodology works in practice if it can be used

  • effectively and
  • efficiently
  • by somebody else besides the methodology's own inventors
  • on a real-world problem.
Everybody can of course use any technique on any problem given sufficient time and patience, but slicing beef with a stone knife is not going to be quick, and the result is surely not a fine carpaccio.
In this lecture I will briefly summarize our experience and some of the findings from more than 5 years of empirical research. The number of artefacts produced by each experiment varied. Artefacts mainly consisted of hand-written graphs and flowcharts, diagrams created using the method's tools or slideshow software, and text documents containing notes written by group members. Participants were students and professionals, each with his or her own peculiarities. Since the proof of the pudding is in the eating, we will also run a little experiment ourselves during the lecture and discuss how to check the results, understand what happened, and so on.

Lecturer: Fabio Massacci

« Main Program

The age of human hacking (3 hours)

Unconventional attacks through the human component of security.
The evolution of the modern workforce, changes in the way we work and live, and the growth of social networks have led to a huge increase in the amount of information about each of us circulating on the networks. Today, we live a blended life. We experience a world where physical and virtual encounters seamlessly merge. We blend our private and professional lives thanks to the flexibility to work at any time from different locations. Unfortunately, these changes happened so fast that most professionals have not been able to fully understand their security consequences. A deep understanding of social presence is often still lacking, sometimes with severe security consequences. "Mainstream" security unfortunately does not fit these new paradigms, and a new approach is required. The lecture discusses the security problems of social network overexposure and their impact on modern cybercrime, the evolution of targeted attacks, and the state of the art and research.

  • Why people share: Web 2.0, social media and social networks, and the new habits
  • The age of human hacking: modern social engineering, its concepts and its role in modern security
  • Modern malware and mobile terminals: Malware 2.0, correct use of mobile terminals, mobile terminal security
  • Modern cybercrime: methods, economic models and future evolutions
  • The mobile world
  • Analysis of some recent attacks

Lecturer: Enrico Frumento

« Main Program

Attack simulation, Countermeasure Design and Security Testing Using Threat Modelling (3 hours)

Part I will give attendees an understanding of the basic threat modeling process and of what threat modeling entails as an application risk analysis process.

Part II will introduce the basic stages of a new application threat modelling process called PASTA (Process for Attack Simulation and Threat Analysis) for conducting threat analysis, attack modelling, and risk management. Attendees will gain insight into how threats can be mitigated by design by incorporating security requirements into the SDLC for the design of security controls, as well as how threat modelling can be used to derive specific security and vulnerability test cases that test the effectiveness of security measures in protecting the application from specific attacks.

Lecturer: Marco Morana

« Main Program

The OWASP testing guide v4 (3 hours)

  • Introduction to Software security
  • What is the state of the Information Security today?
  • The main targets of the cyber attacks
  • Web Application Security: the OWASP Guides today
  • How to use the OWASP Testing Guide
  • How to use the testing guide in your processes
  • Automation vs. manual
  • A structured approach to software security
  • OWASP Guidelines and tools

Lecturer: Matteo Meucci

« Main Program

Machine Learning under Attack: Vulnerability Exploitation and Security Measures (3 hours)

Learning to discriminate between secure and hostile patterns is a crucial problem for species to survive in nature. Mimetism and camouflage are well-known examples of evolving weapons and defenses in the arms race between predators and prey. It is thus clear that the information acquired by our senses should not necessarily be considered secure or reliable. In machine learning and pattern recognition systems, however, we have started investigating these issues only recently. This phenomenon has been especially observed in the context of adversarial settings like malware detection and spam filtering, in which data can be purposely manipulated by humans to undermine the outcome of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an attacker may exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus been one of the main open issues in the novel research field of adversarial machine learning, along with the design of more secure learning algorithms.

In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to evade detection. I then show how carefully designed poisoning attacks can mislead some learning algorithms by manipulating only a small fraction of their training data. In addition, I discuss some defense mechanisms against both attacks in the context of real-world applications, including biometric identity recognition and computer security. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some promising future research directions.
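The evasion idea above can be sketched in a few lines for the simplest possible case: against a linear "detector" f(x) = sign(w·x + b), an attacker lowers the score by repeatedly moving the sample along -w, the gradient of the score with respect to the input. The weights and the "malicious" sample below are invented toy values.

```python
# Toy linear malware detector (invented weights): positive score = detected.
w = [1.5, -0.5, 2.0]
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(x, step=0.1, max_iters=100):
    # gradient-descent evasion: x <- x - step * w until classified benign
    x = list(x)
    for _ in range(max_iters):
        if score(x) < 0:
            return x
        for i in range(len(x)):
            x[i] -= step * w[i]
    return x

malicious = [2.0, 0.0, 1.0]        # score = 1.5*2 + 2*1 - 1 = 4 > 0: detected
adv = evade(malicious)
print(score(malicious) > 0, score(adv) < 0)  # True True
```

Real evasion attacks of this family work the same way against differentiable non-linear classifiers, with the extra constraint that the manipulated sample must remain a valid, still-malicious object.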

Lecturer: Battista Biggio