We develop learning and training methods that reduce energy, memory traffic, and runtime cost while preserving accuracy and robustness.
Training algorithms that explicitly account for efficiency constraints and deployment realities.
Efficient learning enables ML on edge devices and reduces training cost for large models and datasets.
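To make "efficiency constraints" concrete, here is a minimal sketch of quantization-aware training with a straight-through estimator, assuming PyTorch is available. The fake_quantize helper, bit-width, layer sizes, and training loop are illustrative inventions, not a description of our published methods.

```python
# Minimal QAT sketch: weights are fake-quantized in the forward pass,
# while gradients flow through a straight-through estimator (STE).
# All names and hyperparameters here are illustrative.
import torch
import torch.nn as nn

def fake_quantize(x, num_bits=8):
    # Symmetric uniform quantizer; rounding is bypassed in the backward
    # pass via the .detach() trick (the STE).
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax, qmax) * scale
    return x + (q - x).detach()

class QuantLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, fake_quantize(self.weight), self.bias)

model = nn.Sequential(QuantLinear(16, 8), nn.ReLU(), QuantLinear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 16), torch.randn(32, 1)
for _ in range(10):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```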
We design accelerators and HW/SW co-optimization techniques to run ML workloads efficiently under bandwidth and quantization constraints.
Architecture design plus co-optimization across compiler, runtime, and memory hierarchy.
Specialized hardware can dramatically improve throughput and energy per inference, enabling real-time edge intelligence.
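As a back-of-the-envelope illustration of what "bandwidth constraints" means here, the sketch below applies a simple roofline check to a single matrix multiply. The peak compute and bandwidth figures are made up, and the traffic model (each operand moved exactly once) is deliberately simplistic.

```python
# Roofline check for one GEMM layer: is it compute-bound or
# bandwidth-bound on a hypothetical accelerator?
PEAK_FLOPS = 4e12   # 4 TFLOP/s (hypothetical)
PEAK_BW    = 100e9  # 100 GB/s DRAM bandwidth (hypothetical)

def roofline(m, n, k, bytes_per_elem=1):  # 1 byte/element ~ int8 quantization
    flops = 2 * m * n * k                                # multiply-accumulates
    traffic = (m * k + k * n + m * n) * bytes_per_elem   # A, B, C moved once
    intensity = flops / traffic                          # FLOPs per byte
    ridge = PEAK_FLOPS / PEAK_BW                         # machine balance point
    bound = "compute-bound" if intensity > ridge else "bandwidth-bound"
    return intensity, ridge, bound

print(roofline(m=256, n=256, k=1024))
```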
Hardware is the root of trust. We protect hardware IP, detect malicious alterations, and build trustworthy systems across the supply chain.
Attack-and-defense research using formal methods, learning-assisted analysis, and practical threat models.
Even perfect software cannot compensate for untrusted hardware. Security must be anchored at the hardware level.
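One toy flavor of "detecting malicious alterations": compare a golden reference against a suspect implementation and flag the inputs where they disagree. Both functions below are invented 3-input examples, and real detection relies on formal equivalence checking and side-channel analysis rather than exhaustive simulation.

```python
# Golden-model vs. suspect-netlist check: exhaustively compare a
# reference function against a (toy) fabricated-chip model to expose a
# hidden Trojan trigger. Both functions are invented for illustration.
from itertools import product

def golden(a, b, c):
    return (a | b) & c

def suspect(a, b, c):
    # Malicious alteration: flips the output on the rare trigger a=b=c=1.
    return ((a | b) & c) ^ (a & b & c)

mismatches = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
              if golden(a, b, c) != suspect(a, b, c)]
print("Trojan trigger inputs:", mismatches)  # -> [(1, 1, 1)]
```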
We apply machine learning to improve logical and physical hardware design flows, accelerating decisions and improving quality of results (QoR).
Learning models that predict, guide, or optimize design choices across EDA stages.
Design spaces are huge. ML can learn patterns from past designs and flow data to make design flows faster and more predictable.
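As a toy sketch of the idea, a surrogate model can predict a QoR metric from flow parameters and rank candidate configurations without running the full flow. The features, training data, and metric below are synthetic placeholders, not real design data.

```python
# Fit a surrogate that predicts a QoR metric (say, post-route
# wirelength) from flow parameters, then rank unexplored configurations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # e.g. utilization, aspect ratio, effort level
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)  # synthetic QoR

surrogate = RandomForestRegressor(n_estimators=100).fit(X, y)

candidates = rng.uniform(size=(1000, 3))  # unexplored flow configurations
best = candidates[np.argmin(surrogate.predict(candidates))]
print("predicted-best config:", best)
```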
Open-source, CAD-integrated tooling for evaluating and breaking logic-locking and obfuscation mechanisms.
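For readers unfamiliar with logic locking, the sketch below shows the core idea on an invented two-gate circuit: an XOR key gate is inserted on an internal wire, so the design computes its original function only under the correct key, and a naive oracle-guided attack recovers that key. Real tooling operates on full netlists and uses SAT-based attacks rather than brute force.

```python
# XOR-based logic locking on a toy netlist, plus a brute-force
# oracle-guided key-recovery attack. Everything here is invented
# for demonstration.
from itertools import product

def original(a, b, c):
    return (a & b) ^ c

def locked(a, b, c, k):
    w = (a & b) ^ k  # key gate inserted on the internal wire a&b
    return w ^ c     # matches original() iff k == 0

# Correct key preserves the function; the wrong key corrupts outputs.
for a, b, c in product([0, 1], repeat=3):
    assert locked(a, b, c, k=0) == original(a, b, c)
assert any(locked(a, b, c, k=1) != original(a, b, c)
           for a, b, c in product([0, 1], repeat=3))

# Naive attack: brute-force the key against oracle I/O observations.
key = next(k for k in [0, 1]
           if all(locked(a, b, c, k) == original(a, b, c)
                  for a, b, c in product([0, 1], repeat=3)))
print("recovered key:", key)
```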
Research in the GATE Lab is sponsored by: