His primary areas of investigation include Artificial intelligence, Theoretical computer science, Artificial neural network, Abstract interpretation and Scalability. His work explores how Artificial intelligence and Machine learning connect with Data mining and other disciplines. His Theoretical computer science research integrates issues from Leverage, Conditional random field, Graphical model, Structured prediction and JavaScript.
His Artificial neural network study covers areas such as Algorithm, Convolutional neural network and Robustness. In his investigation of Abstract interpretation, he interconnects Static analysis, Inference and Program specification. Within Scalability, he works mostly on Code and, on occasion, Security analysis and Language model.
His primary scientific interests are in Theoretical computer science, Programming language, Artificial intelligence, Artificial neural network and Robustness. Under the umbrella topic of Theoretical computer science, his research closely connects Parallelism with Program analysis. His work in Programming language tackles topics such as Memory model, which relate to areas like Overlay and Set.
His Artificial intelligence research focuses on Machine learning and how it relates to JavaScript and Program synthesis. His work also deals with themes such as Scalability and Task, which intersect with his Artificial neural network research. His Robustness research is multidisciplinary, incorporating elements of Adversarial system, Algorithm and Network architecture.
His primary areas of study are Robustness, Artificial neural network, Scalability, Theoretical computer science and Key. His study of Robustness falls under Artificial intelligence, where he focuses on Probabilistic logic in particular.
His Scalability study integrates concerns from other disciplines, such as Distributed computing, Leverage, Computer engineering and Range. He has researched Theoretical computer science in several fields, including Adversarial system, Computational geometry, Approximation algorithm and Inference. His research also spans a wide range of topics, including Machine learning and Component.
Robustness, Artificial neural network, Theoretical computer science, Scalability and Distributed computing are his primary areas of study. His research in Robustness intersects with topics in Smoothing, Rotation, Heuristic, Interpolation and Parameterized complexity. The concepts of his Artificial neural network study are interwoven with issues in Applied mathematics and Existential quantification.
His Theoretical computer science research incorporates elements of Adversarial system, Representation, Data point, Deep learning and Artificial intelligence. He combines subjects such as Residual, Computer engineering and Key with his study of Scalability. His work on Distributed computing and Certification is multidisciplinary.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
Code completion with statistical language models
Veselin Raychev;Martin Vechev;Eran Yahav.
programming language design and implementation (2014)
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Timon Gehr;Matthew Mirman;Dana Drachsler-Cohen;Petar Tsankov.
ieee symposium on security and privacy (2018)
Securify: Practical Security Analysis of Smart Contracts
Petar Tsankov;Andrei Dan;Dana Drachsler-Cohen;Arthur Gervais.
computer and communications security (2018)
Predicting Program Properties from "Big Code"
Veselin Raychev;Martin Vechev;Andreas Krause.
symposium on principles of programming languages (2015)
An abstract domain for certifying neural networks
Gagandeep Singh;Timon Gehr;Markus Püschel;Martin Vechev.
Proceedings of the ACM on Programming Languages (2019)
Differentiable Abstract Interpretation for Provably Robust Neural Networks
Matthew Mirman;Timon Gehr;Martin T. Vechev.
international conference on machine learning (2018)
Fast and Effective Robustness Certification
Gagandeep Singh;Timon Gehr;Matthew Mirman;Markus Püschel.
neural information processing systems (2018)
Abstraction-guided synthesis of synchronization
Martin Vechev;Eran Yahav;Greta Yorsh.
symposium on principles of programming languages (2010)