Research · GATE Lab · UC Davis

Energy Efficient Learning

We develop learning and training methods that reduce energy, memory traffic, and runtime cost while preserving accuracy and robustness.

What we do in GATE Lab

Training algorithms that explicitly account for efficiency constraints and deployment realities.

Why it matters

Efficient learning enables ML on edge devices and reduces training cost for large models and datasets.

Playful idea: “If the model is a car, we try to keep the speed while shrinking the fuel tank.”

Areas Covered

  • Quantization Aware Training (QAT): Training while simulating low-precision arithmetic to preserve accuracy after quantization (a minimal sketch follows this list).
  • Energy Aware Training: Adding compute/memory cost signals to training so models learn to be efficient under real budgets.
  • Sparse Aware Training: Encouraging structured/unstructured sparsity to reduce operations and memory movement.
  • Compiler Aided Training: Using compiler feedback (operator cost, scheduling, memory plans) to guide training decisions.
  • Forward-Forward Learning (FFL): An alternative learning approach that can avoid backpropagation in certain settings, targeting efficiency.
  • Saliency Guided Training: Using importance signals to skip, prune, or downweight low-impact computation.
  • Transformer Dynamic Token Modulation: Incremental sampling, dropping, extension, and compaction of tokens to match compute to input difficulty.
  • Iterative Inference (Incremental Resolution Enhancement): Progressively refining outputs only as needed, saving compute on easy inputs.
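
To make the QAT idea concrete, here is a minimal PyTorch sketch, not the lab's implementation: weights are rounded to simulated 8-bit values in the forward pass, while gradients flow through unchanged via a straight-through estimator, so the model learns weights that survive quantization. The `FakeQuant` and `QATLinear` names and the bit-width are assumptions for this demo.

```python
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Round to simulated low-precision values in the forward pass, but
    let gradients through unchanged (straight-through estimator)."""
    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # pass-through for x; no grad for num_bits

class QATLinear(nn.Linear):
    """Linear layer trained against its own simulated-int8 weights."""
    def forward(self, x):
        return nn.functional.linear(x, FakeQuant.apply(self.weight, 8), self.bias)

# Toy usage: the model learns weights that survive 8-bit rounding.
model = nn.Sequential(QATLinear(16, 32), nn.ReLU(), QATLinear(32, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
for _ in range(20):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```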

Hardware for ML

We design accelerators and HW/SW co-optimization techniques to run ML workloads efficiently under bandwidth and quantization constraints.

What we do in GATE Lab

Architecture design plus co-optimization across compiler, runtime, and memory hierarchy.

Why it matters

Specialized hardware can dramatically improve throughput and energy per inference, enabling real-time edge intelligence.

Fun fact: Avesta Sasan teaches the graduate course EEC 289Q “Machine Learning Hardware” at UC Davis, grounded in GATE Lab research.

Areas Covered

  • Custom ASIC Design: Application-specific silicon tailored for ML workloads with strict energy and throughput targets.
  • Custom Convolution Engines: Specialized datapaths for CNN layers that maximize reuse and minimize memory traffic.
  • Systolic Processor Design: Regular array-based compute for matrix multiplications and tensor operations with high utilization (see the sketch after this list).
  • Hardware Aware ML Compiler Optimization: Compiler transformations that adapt models and schedules to the target hardware’s strengths/limits.
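
For a flavor of the systolic design point, the following toy NumPy model simulates an output-stationary array; the sizes and scheduling are illustrative assumptions, not a GATE Lab design. Each processing element (PE) owns one output, and skewed operand streams let every PE perform one multiply-accumulate per cycle.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle toy model of an output-stationary systolic array.
    PE (i, j) accumulates C[i, j]; A streams in from the left (skewed by
    row) and B from the top (skewed by column), one operand per cycle."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    # Enough cycles for the last skewed operand to reach PE (n-1, m-1).
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j  # reduction step arriving at PE (i, j) at cycle t
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]  # one MAC per PE per cycle
    return C

A = np.random.randn(4, 6)
B = np.random.randn(6, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```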

Hardware Security

Hardware is the root of trust. We protect hardware IP, detect malicious alterations, and build trustworthy systems across the supply chain.

What we do in GATE Lab

Attack-and-defense research using formal methods, learning-assisted analysis, and practical threat models.

Why it matters

Even perfect software cannot compensate for untrusted hardware. Security must be anchored at the hardware level.

Areas Covered

  • Hardware Trojan Detection: Detecting malicious modifications using side-channel signals, tests, or structural analysis.
  • Formal Verification: Mathematically proving security/correctness properties using model checking and theorem proving.
  • SMT Attack: Using satisfiability modulo theories solvers to break protections by reasoning over constraints.
  • RANE Attack: Attack methodology for assessing obfuscation/locking resilience via structural and constraint exploitation.
  • Side Channel Analysis: Extracting secrets or detecting anomalies via power, timing, EM, or other indirect signals.
  • Aged and Out-of-Spec IC Detection: Identifying recycled/aged parts via low-cost tests and learning-assisted signatures.
  • Learning from Test: Using manufacturing/validation test data to infer defects, reliability risk, or malicious behavior.
  • IP Protection via Logic Obfuscation: Hiding functionality behind keys/transformations to deter reverse engineering.
  • Supply Chain Security: Ensuring trust across fabrication/assembly/distribution, including counterfeit and tamper detection.
  • Physical Unclonable Function (PUF): Device-unique signatures from manufacturing variation for authentication and key generation (a toy simulation follows this list).
  • Machine Learning Security: Defending ML models from adversarial, poisoning, extraction, and backdoor threats.
  • Explainable AI (XAI): Interpreting ML decisions to improve trust, diagnose failures, and support security auditing.
  • ML-assisted Malware Detection: Detecting malware using learning over microarchitectural/hardware-event signatures and anomalies.
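
The PUF entry can be illustrated with a minimal simulation under the standard linear additive-delay model of an arbiter PUF; the `ArbiterPUF` class, sizes, and threshold below are invented for this demo. "Manufacturing variation" is drawn at construction time, so only the enrolled device reproduces its challenge-response pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

class ArbiterPUF:
    """Toy linear additive-delay model of an arbiter PUF: each instance
    draws device-unique stage delays ('manufacturing variation') at
    construction; the response is the sign of the accumulated delay."""
    def __init__(self, n_stages=64):
        self.delays = rng.normal(size=n_stages + 1)

    def respond(self, challenge):
        # Standard parity feature map for the linear arbiter-PUF model.
        phi = np.append(np.cumprod(1 - 2 * challenge[::-1])[::-1], 1.0)
        return int(self.delays @ phi > 0)

# Enrollment: the verifier records challenge-response pairs (CRPs).
device = ArbiterPUF()
challenges = rng.integers(0, 2, size=(32, 64))
crps = [(c, device.respond(c)) for c in challenges]

# Authentication: the genuine device reproduces its CRPs; a clone built
# with different manufacturing variation matches only by chance (~50%).
clone = ArbiterPUF()
genuine = all(device.respond(c) == r for c, r in crps)
clone_rate = sum(clone.respond(c) == r for c, r in crps) / len(crps)
print(f"genuine matches all CRPs: {genuine}; clone match rate: {clone_rate:.0%}")
```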

ML for HW Design

We apply machine learning to improve logical and physical hardware design flows, accelerating decisions and improving quality of results (QoR).

What we do in GATE Lab

Learning models that predict, guide, or optimize design choices across EDA stages.

Why it matters

Design spaces are huge. ML can learn patterns from past designs/data to make flows faster and more predictable.

Areas Covered

  • ML Assisted Timing Analysis: Predicting timing/QoR outcomes quickly to guide optimization earlier in the flow (a small regression sketch follows this list).
  • ML Assisted Yield Prediction: Estimating yield and failure risk using design/manufacturing signals to inform tradeoffs.
  • Learning from Manufacturing Test: Mining test data to diagnose issues and predict reliability/yield.
  • Learning Assisted Clock Tree Synthesis: Using ML guidance for CTS decisions to improve skew/power and reduce iteration time.

Design flow joke: “If EDA is chess, ML is the opening book—fewer bad moves, faster progress.”
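
As a sketch of ML assisted timing analysis, one can train a regressor to predict per-path delay from cheap structural features and use it to triage paths before full signoff analysis. All features, coefficients, and data below are synthetic assumptions for the demo, not real EDA outputs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Hypothetical per-path features; in a real flow these would come from
# the netlist and floorplan (logic depth, fanout, wirelength, drive mix).
X = np.column_stack([
    rng.integers(5, 40, n),   # logic depth (stages)
    rng.uniform(1, 30, n),    # summed fanout
    rng.uniform(10, 500, n),  # estimated wirelength (um)
    rng.uniform(0, 1, n),     # fraction of weak-drive cells
])
# Synthetic "ground truth" path delay, purely for demonstration.
y = (0.02 * X[:, 0] + 0.005 * X[:, 1] + 0.001 * X[:, 2]
     + 0.3 * X[:, 3] + rng.normal(0, 0.02, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"R^2 on held-out paths: {model.score(X_te, y_te):.3f}")
```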

Tools from GATE Lab

Open-source, CAD-integrated tooling for evaluating and breaking logic locking / obfuscation mechanisms.

Available Tools

  • SMT Attack (Satisfiability Modulo Theories Attack): A superset of SAT-based deobfuscation that augments a SAT solver with one or more theory solvers. This enables modeling non-Boolean properties (e.g., timing/delay constraints) and launching stronger attacks not possible with pure SAT formulations. The framework supports multiple modes (SAT-reduction, eager SMT, lazy SMT) and includes AccSMT for speedups and approximate attacks against SMT-hard locking schemes. A toy oracle-guided sketch follows this list.

    Tool access: cadforassurance.org — SMT Attack
  • RANE Attack (Reversal Assessment of Netlist Encryption): An open-source CAD-based toolbox for evaluating logic locking mechanisms with a unique interface that uses formal verification tools directly—without translations or simplifications. RANE accepts library-dependent, standard HDL (written/elaborated/synthesized Verilog), often performs better than existing attacks, and supports advanced case studies such as FSM locking (e.g., HARPOON) where the key is a sequence of input patterns rather than a static bit-vector.

    Tool access: cadforassurance.org — RANE
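
For intuition about solver-based deobfuscation, here is a self-contained toy in the spirit of the oracle-guided attack loop, written with the z3 solver: a three-input circuit is locked with two XOR key gates; the attacker repeatedly finds an input on which two candidate keys disagree, queries the oracle, and constrains both keys until no distinguishing input remains. The circuit, key names, and loop structure are illustrative assumptions; the actual SMT Attack and RANE tools are far more general.

```python
from z3 import And, Bools, Or, Solver, Xor, is_true, sat

# The attacker's black box: an unlocked chip computing y = (a & b) | c.
def oracle(a, b, c):
    return (a and b) or c

# The same netlist locked with two XOR key gates; key (0, 0) restores it.
def locked(a, b, c, k0, k1):
    return Or(And(Xor(a, k0), b), Xor(c, k1))

a, b, c = Bools("a b c")
k0a, k1a = Bools("k0a k1a")  # first candidate key
k0b, k1b = Bools("k0b k1b")  # second candidate key

s = Solver()
io_constraints = []  # oracle observations accumulated so far
while True:
    # Find a distinguishing input: some pattern on which two keys, both
    # consistent with every observation so far, still disagree.
    s.push()
    s.add(*io_constraints)
    s.add(locked(a, b, c, k0a, k1a) != locked(a, b, c, k0b, k1b))
    found = s.check() == sat
    if found:
        m = s.model()
        dip = [is_true(m.evaluate(v, model_completion=True)) for v in (a, b, c)]
    s.pop()
    if not found:
        break  # no distinguishing input left: any consistent key is correct
    out = oracle(*dip)  # one oracle query per iteration
    for k0, k1 in ((k0a, k1a), (k0b, k1b)):
        io_constraints.append(locked(dip[0], dip[1], dip[2], k0, k1) == out)

s.add(*io_constraints)
assert s.check() == sat
m = s.model()
print("recovered key:",
      m.evaluate(k0a, model_completion=True),
      m.evaluate(k1a, model_completion=True))
```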

Research Sponsors

Research in the GATE Lab is sponsored by:
