Research

Formal Analysis of Cyber-Physical Systems

Formally modelling and verifying a complex cyber-physical system (CPS) is extremely challenging. Can we automatically learn models for verifying such systems, without manual modelling?

Series of Works

  • [ETAPS/FASE 2017, STTT 2018] Conducted an empirical study on learning probabilistic models (e.g., discrete-time Markov chains) from system traces, and proposed a new learning framework based on genetic algorithms (a minimal sketch follows this list).
  • [TSE 2018] Proposed a CEGAR-style learning framework based on predicate abstraction, extending learning to large-scale systems with potentially infinite state spaces.
  • [ICFEM 2017] Proposed an active learning algorithm that guides how the system is sampled, so that a more accurate model can be learned.
  • [FM 2018] Conducted a case study applying the above learning algorithms to the verification of a real-world testbed, the Secure Water Treatment (SWaT) system.
  • [DSN 2018] Proposed an importance sampling algorithm over the learned approximate probabilistic model, yielding more accurate verification results when rare events are involved (see the second sketch after this list).
  • [ICSE 2018, ACM SIGSOFT Distinguished Paper Award] Applied the idea of probabilistic learning to model concolic testing of programs, derived the optimal testing strategy, and proposed a greedy algorithm to approximate it.
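
To give a flavour of the trace-based learning above, here is a minimal Python sketch: given traces as sequences of discrete state labels (the toy traces below are hypothetical), the transition probabilities of a discrete-time Markov chain are estimated by frequency counting. The published framework goes further, using a genetic algorithm to search for better models than this simple estimate.

    from collections import Counter, defaultdict

    def learn_dtmc(traces):
        # Each trace is a sequence of discrete state labels; transition
        # probabilities are maximum-likelihood frequency estimates.
        counts = defaultdict(Counter)
        for trace in traces:
            for src, dst in zip(trace, trace[1:]):
                counts[src][dst] += 1
        return {src: {dst: n / sum(succ.values()) for dst, n in succ.items()}
                for src, succ in counts.items()}

    # Hypothetical toy traces over states "ok" and "fail".
    traces = [["ok", "ok", "fail", "ok"], ["ok", "ok", "ok", "fail"]]
    print(learn_dtmc(traces))
    # {'ok': {'ok': 0.6, 'fail': 0.4}, 'fail': {'ok': 1.0}}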
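
The rare-event idea from the [DSN 2018] item can be illustrated in isolation. In the minimal sketch below (the chains P and Q are hypothetical toys, and the proposal Q is hand-picked rather than derived as in the paper), paths are sampled from a biased chain under which the rare event is common, and each path is reweighted by its likelihood ratio; this yields an unbiased estimate with far fewer samples than crude Monte Carlo.

    import random

    def sample_path(P, start, horizon):
        # Sample a path of `horizon` transitions from a chain given as
        # a dict of per-state successor distributions.
        path = [start]
        for _ in range(horizon):
            states, probs = zip(*P[path[-1]].items())
            path.append(random.choices(states, probs)[0])
        return path

    def importance_estimate(P, Q, start, rare, horizon, n=100_000):
        # Estimate the probability that P reaches `rare` within `horizon`
        # steps by sampling from the biased chain Q (which must allow every
        # transition of P) and reweighting each path by its likelihood ratio.
        total = 0.0
        for _ in range(n):
            path = sample_path(Q, start, horizon)
            if rare in path:
                w = 1.0
                for s, t in zip(path, path[1:]):
                    w *= P[s][t] / Q[s][t]
                total += w
        return total / n

    # Failure is rare under P; Q makes it common, the weights correct for it.
    P = {"ok": {"ok": 0.999, "fail": 0.001}, "fail": {"fail": 1.0}}
    Q = {"ok": {"ok": 0.9,   "fail": 0.1},   "fail": {"fail": 1.0}}
    print(importance_estimate(P, Q, "ok", "fail", horizon=10))  # ~0.00996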

Software Engineering for Dependable Artificial Intelligence Systems

AI is deeply intertwined with software engineering. AI systems can be intrinsically vulnerable or discriminatory, which makes them hard to trust; software engineering can help, either through testing or through formal verification.

Series of Works

  • [ICSE 2019] Proposed a runtime detection framework to filter adversarial samples of deep neural networks. Unlike prior detection algorithms, it is efficient enough to be applied at runtime.
  • [ICSE 2020, ACM SIGSOFT Distinguished Paper Award] Proposed a scalable and efficient gradient-based algorithm to search for discriminatory samples of deep neural networks; the generated samples can be used for data augmentation to improve model fairness (see the sketch after this list).
  • [ASE 2020] Proposed to interpret recurrent neural networks by extracting more tractable, interpretable and transparent models, i.e., probabilistic finite automata. The extracted models can potentially bridge the gap between black-box neural networks and existing model-based analysis methods such as testing, adversarial attack and detection, predictive analysis, and runtime monitoring.
  • [ICECCS 2020] Conducted an empirical study on the correlation between coverage and robustness of deep neural networks, which turns out to be surprisingly limited.

  • Several other works are in submission.
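
To make the gradient-based fairness testing of the [ICSE 2020] item concrete, the sketch below uses a toy logistic model in place of a deep neural network (the model, weights and parameters are illustrative only, not the published algorithm, which combines a global gradient-guided phase with a local search). An input is discriminatory if flipping just a protected attribute changes the decision; the search perturbs the remaining features along the gradient, driving the input toward the decision boundary where such instances concentrate.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=5)  # toy logistic model standing in for a trained network

    def prob(x):
        return 1.0 / (1.0 + np.exp(-x @ W))

    def is_discriminatory(x, protected=0):
        # An input is discriminatory if flipping only the (binary)
        # protected attribute changes the model's decision.
        x2 = x.copy()
        x2[protected] = 1.0 - x2[protected]
        return (prob(x) > 0.5) != (prob(x2) > 0.5)

    def search(x, protected=0, steps=50, lr=0.1):
        # Perturb the non-protected features along the gradient so the
        # input moves toward the decision boundary, where flipping the
        # protected attribute is most likely to flip the decision.
        x = x.astype(float).copy()
        for _ in range(steps):
            if is_discriminatory(x, protected):
                return x
            p = prob(x)
            grad = p * (1.0 - p) * W      # gradient of prob for this toy model
            grad[protected] = 0.0         # never perturb the protected attribute
            direction = -1.0 if p > 0.5 else 1.0
            x += direction * lr * np.sign(grad)
        return None                       # no discriminatory sample found

    x0 = rng.integers(0, 2, size=5).astype(float)
    print(search(x0))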