Tribhuvanesh Orekondy
I am a machine learning researcher at Qualcomm AI Research. Before that, I was a PhD student at the Max Planck Institute for Informatics, where I worked on Computer Vision and Machine Learning, advised by Mario Fritz and Bernt Schiele. Previously, I graduated with a Master's degree in CS from ETH Zürich.
Email  ·  Google Scholar  ·  Github  ·  LinkedIn  ·  MPI
News
- Sep '20: GS-WGAN accepted at NeurIPS '20
- May '20: New tech report: InfoScrub
- Dec '19: Prediction Poisoning accepted at ICLR '20
- Oct '19: Presenting Gradient Leaks at FL NeurIPS '19
Research
I'm broadly interested in Computer Vision and Machine Learning. During my PhD, I focused on topics in trustworthy and reliable ML (adversarial ML, privacy-preserving techniques). I'm also interested in deep generative models, and in sample-efficient and weakly-/semi-supervised learning approaches.
InfoScrub: Towards Attribute Privacy by Targeted Obfuscation
Hui-Po Wang, Tribhuvanesh Orekondy, Mario Fritz
CVPR (Fair, Trusted, and Data Efficient Computer Vision workshop), 2021
paper  ·  bibtex
An image obfuscation network that removes privacy-attribute information (e.g., by inverting attributes or maximizing their uncertainty) while retaining image fidelity.
Differential Privacy Defenses and Sampling Attacks for Membership Inference
Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz
AISec, 2021
paper  ·  bibtex
Differential Privacy approaches to defend against membership inference attacks.
GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators
Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz
NeurIPS, 2020
paper  ·  bibtex
A novel GAN for releasing sanitized forms of data with rigorous differential privacy guarantees.
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
ICLR, 2020
paper  ·  project page  ·  bibtex
An optimization-based defense against model stealing attacks, with perturbations crafted to poison resulting gradient signals.
Knockoff Nets: Stealing Functionality of Black-Box Models
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
CVPR, 2019
paper  ·  poster  ·  extended abstract (CV-COPS@CVPR)  ·  project page  ·  bibtex
Vision models encode meaningful information in their predictions even on out-of-distribution natural images. We exploit this property to steal the functionality of complex vision models.
Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz
FL NeurIPS, 2019
paper  ·  poster  ·  talk  ·  bibtex
Gradient parameter deltas in Federated Learning encode user bias statistics of participating devices, raising deanonymization concerns.
Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images
Tribhuvanesh Orekondy, Mario Fritz, Bernt Schiele
CVPR, 2018 (Spotlight)
paper  ·  poster  ·  project page  ·  video  ·  bibtex
Automatic method to identify and redact a broad range of private information spanning multiple modalities in visual content.
Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
ICCV, 2017
paper  ·  poster  ·  extended abstract (VSM@ICCV)  ·  project page  ·  bibtex
An approach to understand and predict a wide spectrum of privacy risks in images.
HADES: Hierarchical Approximate Decoding for Structured Prediction
Tribhuvanesh Orekondy
(under supervision of Martin Jaggi, Aurelien Lucchi, Thomas Hofmann)
Master's Thesis, 2016
paper  ·  project page  ·  bibtex
A fast structured-output learning algorithm that works with decoding oracles of varying degrees of approximation.
Academic Activities
- Reviewing: CVPR '19, CV-COPS '19, TPAMI '19, ICCV '20, AAAI '20, CVPR '20, ECCV '20, NeurIPS '20, IJCV '20, WACV '21, CVPR '21, ICLR '21 (Outstanding reviewer award)
- Teaching Assistant: Machine Learning in Cyber Security, 2018, 2019
- Thesis co-supervision: Shadi Rahimian (MSc., Saarland University), Jonas Klesen (BSc., Saarland University)