Tribhuvanesh Orekondy

I am a machine learning researcher at Qualcomm AI Research in Amsterdam. Before that, I was a PhD student at the Max Planck Institute for Informatics, where I worked on computer vision and machine learning, advised by Mario Fritz and Bernt Schiele. Earlier, I graduated with a Master's degree in CS from ETH Zürich.

Email  ·  Google Scholar  ·  Github  ·  LinkedIn  ·  MPI



I'm broadly interested in computer vision and machine learning. During my PhD, I focused on topics in trustworthy and reliable ML (adversarial ML, privacy-preserving techniques). I'm also interested in sample-efficient and weakly-/semi-supervised learning approaches.
GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators
Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz
NeurIPS, 2020
paper  ·  bibtex

A novel GAN that enables releasing sanitized forms of data with rigorous differential privacy guarantees.

InfoScrub: Towards Attribute Privacy by Targeted Obfuscation
Hui-Po Wang, Tribhuvanesh Orekondy, Mario Fritz
CVPR (Fair, Trusted, and Data Efficient Computer Vision workshop), 2021
paper  ·  bibtex

An image obfuscation network that removes private attribute information (e.g., by inverting attributes or maximizing uncertainty) while retaining image fidelity.

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
ICLR, 2020
paper  ·  project page  ·  bibtex

An optimization-based defense against model stealing attacks, with perturbations crafted to poison resulting gradient signals.

Knockoff Nets: Stealing Functionality of Black-Box Models
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
CVPR, 2019
paper  ·  poster  ·  extended abstract (CV-COPS@CVPR)  ·  project page  ·  bibtex

Vision models encode meaningful information in their predictions, even on out-of-distribution natural images. We exploit this property to steal the functionality of complex vision models.


Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz
NeurIPS (Federated Learning workshop), 2019
paper  ·  poster  ·  talk  ·  bibtex

Parameter gradients exchanged in federated learning encode user-specific statistics of participating devices, raising deanonymization concerns.

Differential Privacy Defenses and Sampling Attacks for Membership Inference
Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz
NeurIPS (PriML workshop), 2019
paper  ·  bibtex

Differential privacy approaches to defend against membership inference attacks.

Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images
Tribhuvanesh Orekondy, Mario Fritz, Bernt Schiele
CVPR, 2018 (Spotlight)
paper  ·  poster  ·  project page  ·  video  ·  bibtex

An automatic method to identify and redact a broad range of private information spanning multiple modalities in visual content.


Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
ICCV, 2017
paper  ·  poster  ·  extended abstract (VSM@ICCV)  ·  project page  ·  bibtex

An approach to understand and predict a wide spectrum of privacy risks in images.


HADES: Hierarchical Approximate Decoding for Structured Prediction
Tribhuvanesh Orekondy (under the supervision of Martin Jaggi, Aurelien Lucchi, Thomas Hofmann)
Master's Thesis, 2016
paper  ·  project page  ·  bibtex

A fast structured-output learning algorithm that works by approximating decoding oracles to varying extents.

Academic Activities

  • Reviewing: CVPR '19, CV-COPS '19, TPAMI '19, ICCV '20, AAAI '20, CVPR '20, ECCV '20, NeurIPS '20, IJCV '20, WACV '21, CVPR '21, ICLR '21 (Outstanding reviewer award)
  • Teaching Assistant: Machine Learning in Cyber Security, 2018, 2019
  • Thesis co-supervision: Shadi Rahimian (MSc., Saarland University), Jonas Klesen (BSc., Saarland University)