Controllers

A practical overview of feedback controllers, organized from simple to advanced


FUNDAMENTAL Single-Loop
Bang-Bang (On-Off) Controller
Output switches between two states based on error sign
Nonlinear
Proportional (P) Controller
Output proportional to error — the simplest linear controller
Linear
PID Controller
Proportional + Integral + Derivative — the workhorse of industry
Linear

Why Start Here

These controllers require no mathematical model of the plant. They operate directly on the error signal (desired minus measured) and produce a corrective output. Despite their simplicity, PID alone accounts for over 90% of industrial control loops. Most engineers never need anything beyond a well-tuned PID.
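To make the error-to-output path concrete, here is a minimal discrete-time PID loop in textbook form, closed around a toy first-order plant. The gains and the plant are illustrative, not tuned for any particular system.

```python
# Minimal discrete-time PID controller (textbook parallel form).
# Gains kp, ki, kd and the plant below are illustrative values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement          # desired minus measured
        self.integral += error * self.dt        # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a first-order plant x' = -x + u toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):                           # 20 s of simulated time
    u = pid.update(1.0, x)
    x += 0.01 * (-x + u)                        # forward-Euler plant step

print(round(x, 3))                              # settles near the setpoint
```

Note that the controller never sees the plant equations; it consumes only the error signal, which is the defining feature of this family.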
CLASSICAL Frequency-Domain
Lead Compensator
Adds phase lead near crossover to increase stability margins
Linear
Lag Compensator
Boosts low-frequency gain to reduce steady-state error
Linear
Lead-Lag Compensator
Combines lead and lag to simultaneously improve transient and steady-state response
Linear

The Frequency-Domain Toolkit

Classical compensators are designed using Bode plots, root locus, and Nyquist diagrams. They shape the open-loop transfer function to achieve desired gain margin, phase margin, and bandwidth. This is the language of control engineering textbooks and remains the standard approach when the plant can be adequately described by a transfer function.
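The algebra behind a lead design is short enough to show directly. The sketch below sizes a lead compensator C(s) = (Ts + 1)/(αTs + 1), α < 1, from a desired maximum phase boost, using the standard relations α = (1 − sin φ_max)/(1 + sin φ_max) and ω_max = 1/(T√α). The target phase and crossover frequency are assumed numbers for illustration.

```python
# Sizing a lead compensator C(s) = (T s + 1) / (alpha T s + 1), alpha < 1,
# from a desired phase boost -- standard textbook relations with
# illustrative targets.
import math

phi_max = math.radians(40.0)   # desired extra phase margin (assumed)
wc = 10.0                      # desired crossover frequency, rad/s (assumed)

# The maximum lead phi_max fixes the pole/zero ratio alpha:
alpha = (1 - math.sin(phi_max)) / (1 + math.sin(phi_max))

# Place the peak of the lead at the crossover: w_max = 1/(T*sqrt(alpha))
T = 1.0 / (wc * math.sqrt(alpha))

zero = 1.0 / T                 # compensator zero, rad/s
pole = 1.0 / (alpha * T)       # compensator pole, rad/s
print(f"alpha={alpha:.3f} zero={zero:.2f} pole={pole:.2f}")
```

The zero and pole straddle the crossover geometrically (their product equals ωc²), which is exactly the "phase lead near crossover" described above.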
STATE-SPACE Model-Based
Full-State Feedback (Pole Placement)
Places closed-loop poles at desired locations via state feedback gain K
Linear Model-Based
Linear Quadratic Regulator (LQR)
Optimal state feedback minimizing a quadratic cost on states and inputs
Linear Optimal Model-Based
LQG (LQR + Kalman Filter)
Optimal control with optimal estimation — the separation principle in action
Linear Optimal Model-Based

The Power of Models

State-space methods use an explicit mathematical model of the plant (matrices A, B, C, D) to compute control laws with guaranteed properties. Pole placement gives direct control over dynamics; LQR finds the best trade-off between performance and effort; LQG handles noisy measurements. These methods require the system to be controllable and (for LQG) observable.
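For a scalar plant the LQR computation fits in a few lines: iterate the discrete Riccati recursion to a fixed point, then read off the gain. The plant and weights below are illustrative (note the open loop is unstable, |a| > 1, yet the closed-loop pole lands inside the unit circle).

```python
# Discrete-time LQR for a scalar plant x[k+1] = a x[k] + b u[k],
# found by iterating the Riccati recursion. a, b, q, r are illustrative.
a, b = 1.1, 0.5          # unstable open loop (|a| > 1)
q, r = 1.0, 0.1          # state and input weights

P = q                    # Riccati variable, initialized at the state cost
for _ in range(1000):    # P = q + a^2 P - (a b P)^2 / (r + b^2 P)
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

K = a * b * P / (r + b * b * P)          # optimal gain, u = -K x
print(round(K, 4), round(a - b * K, 4))  # closed-loop pole a - b K
```

Raising r (penalizing effort) shrinks K and moves the closed-loop pole back toward the open-loop one; this is the performance/effort trade-off made explicit.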
ADVANCED Nonlinear & Robust
Sliding Mode Control
Forces state onto a sliding surface — robust to matched uncertainties
Nonlinear Robust
Model Predictive Control (MPC)
Solves an optimization problem at each timestep over a receding horizon
Optimal Model-Based
Feedback Linearization (NDI)
Cancels nonlinearities algebraically to yield a linear closed-loop system
Nonlinear Model-Based
Incremental NDI (INDI)
Sensor-based incremental correction — unmodeled disturbances are absorbed through acceleration measurements
Nonlinear Robust
Geometric Control
Control on Lie groups (SO(3), SE(3)) — avoids singularities of local coordinates
Nonlinear Model-Based
Backstepping
Recursively constructs a Lyapunov function for strict-feedback nonlinear systems
Nonlinear Adaptive

When Is Nonlinear Control Necessary?

Real physical systems are typically nonlinear with dominant second-order dynamics. Whatever the approach, some form of restoring force (a P-like term) and damping (a D-like term) is almost always required. So when do you need more?
  • Systems underactuated with respect to the desired trajectory, where the gap is too large for simple feedforward or integrator terms (e.g., aggressive quadrotor trajectories)
  • Complex and/or non-holonomic constraints (friction, contact dynamics, car-like vehicles, cost coupling over a manifold)
  • Strong unmodeled disturbances
  • Topology mismatch with the mathematical representation (see Geometric Control)
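The "strong unmodeled disturbances" case is where sliding mode shines, and a one-dimensional example captures it. The sketch below controls a double integrator x'' = u + d with a bounded disturbance d that the controller never models; all gains and the disturbance are illustrative, and the raw sign() switching shown here chatters in practice (real implementations use a boundary layer).

```python
# Sliding mode control of a double integrator x'' = u + d with an
# unknown bounded disturbance d. Gains and disturbance are illustrative.
import math

lam, k = 2.0, 3.0                    # surface slope; switching gain k > |d|
x, v = 1.0, 0.0                      # initial position error and velocity
dt = 1e-3

for i in range(10000):               # 10 s
    d = 1.5 * math.sin(3.0 * i * dt) # matched disturbance, |d| <= 1.5 < k
    s = v + lam * x                  # sliding surface s = x' + lam * x
    u = -lam * v - k * (1 if s > 0 else -1)  # drive s to 0; then x -> 0
    v += dt * (u + d)
    x += dt * v

print(abs(x) < 0.05)                 # error held near zero despite d
```

On the surface s = 0 the dynamics collapse to x' = -λx regardless of d, which is the robustness-to-matched-uncertainty claim in the card above.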

General Design Principles

The Fundamental Trade-off: Performance vs. Robustness

Every controller navigates a tension between aggressive performance (fast response, tight tracking) and robustness (tolerance of modeling errors, disturbances, noise). High gain gives fast response but amplifies noise and can destabilize uncertain plants. The Bode sensitivity integral (waterbed effect) makes this precise: reducing sensitivity at one frequency must increase it at another.

A PID tuned for blazing speed on the nominal plant will often oscillate or go unstable when deployed on a real system with unmodeled dynamics. Back off the gains.
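A toy simulation makes the mechanism visible: a proportional loop on an integrator plant, with one sample of unmodeled actuator delay. The gains, plant, and delay are illustrative; the point is that the same loop that settles cleanly at modest gain diverges once the gain is high enough that the delay matters.

```python
# High gain vs. unmodeled delay: P control of x' = u, discretized,
# with one sample of unmodeled actuator delay. Numbers are illustrative.
dt = 0.01

def settle(kp, steps):
    x, u_prev = 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        u = kp * (1.0 - x)        # P control toward setpoint 1.0
        x += dt * u_prev          # the plant sees last step's command
        u_prev = u
        peak = max(peak, abs(x))
    return x, peak

x_lo, peak_lo = settle(kp=5.0, steps=3000)    # modest gain: clean settling
x_hi, peak_hi = settle(kp=150.0, steps=600)   # aggressive gain: diverges
print(round(peak_lo, 2), peak_hi > 10)
```

For this discretization the loop is stable only while kp·dt < 1, so the "blazing" gain sits well past the boundary the nominal (delay-free) model would predict to be safe.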

The Internal Model Principle

To reject a disturbance or track a reference with zero steady-state error, the controller must contain an internal model of the signal it needs to handle. For step inputs, this means an integrator in the loop. For sinusoidal disturbances, a resonant pair. This is why the "I" in PID exists, and why repetitive controllers work for periodic disturbances.

A P-only controller will always have steady-state error to a step reference. Adding integral action (PI) includes the model of a step (1/s) and eliminates the error — guaranteed by the internal model principle.
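This is easy to verify numerically. The sketch below runs P-only and PI loops on the same first-order plant tracking a unit step; gains are illustrative. The P-only error settles to exactly 1/(1 + kp), while the integrator drives it to zero.

```python
# Internal model principle in one simulation: P-only vs PI control of the
# first-order plant x' = -x + u tracking a unit step. Gains illustrative.
dt, T = 0.01, 20.0

def run(kp, ki):
    x, z = 0.0, 0.0                  # plant state and integral of error
    for _ in range(int(T / dt)):
        e = 1.0 - x
        z += dt * e                  # the internal model of a step (1/s)
        x += dt * (-x + kp * e + ki * z)
    return 1.0 - x                   # steady-state tracking error

print(round(run(kp=4.0, ki=0.0), 3))   # P-only: error is 1/(1 + kp)
print(round(run(kp=4.0, ki=2.0), 3))   # PI: the integrator removes it
```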

Separation of Estimation and Control

For linear Gaussian systems, the optimal controller can be designed in two independent steps: first design a Kalman filter (optimal estimator), then design an LQR (optimal regulator) as if full state were available. The combined LQG controller is still optimal. This separation principle drastically simplifies design — but it does not hold for nonlinear systems or when robustness to model uncertainty is required.
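The two-step design can be shown end to end on a scalar system. Below, the Kalman gain and the LQR gain are each computed as if the other did not exist, then run together; the plant, noise variances, and cost weights are all illustrative numbers.

```python
# Separation principle in miniature: for the scalar system
# x[k+1] = a x[k] + u[k] + w[k],  y[k] = x[k] + v[k],
# design the Kalman filter and LQR independently, then combine them.
import random

random.seed(0)
a = 0.95
W, V = 0.01, 0.04                 # process / measurement noise variances

# 1) Steady-state Kalman gain, designed as if there were no controller.
P = 1.0                           # prediction error variance
for _ in range(500):
    L = P / (P + V)               # measurement update gain
    P = a * a * (1 - L) * P + W   # next-step prediction variance

# 2) LQR gain for x[k+1] = a x + u, cost sum(q x^2 + r u^2),
#    designed as if the full state were measurable.
q, r = 1.0, 0.2
S = q
for _ in range(500):
    S = q + a * a * S - (a * S) ** 2 / (r + S)
K = a * S / (r + S)

# 3) LQG: run the LQR on the Kalman estimate instead of the true state.
x, xpred = 2.0, 0.0
for _ in range(300):
    y = x + random.gauss(0.0, V ** 0.5)
    xhat = xpred + L * (y - xpred)          # correct with the measurement
    u = -K * xhat                           # certainty-equivalent control
    x = a * x + u + random.gauss(0.0, W ** 0.5)
    xpred = a * xhat + u                    # predict the next state

print(abs(x) < 1.0 and abs(x - xpred) < 1.0)
```

Neither design step referenced the other, yet the combined loop both regulates the state and tracks it through noise.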

LQG has famously poor robustness guarantees despite being "optimal." This motivated the development of H-infinity and mu-synthesis methods that jointly consider performance and uncertainty.

You Cannot Control What You Cannot Observe

Controllability and observability are binary prerequisites. If a state is uncontrollable, no control input can influence it. If unobservable, no sensor measurement reveals it. Before designing any controller, verify these structural properties from the (A, B) and (A, C) pairs. If the system fails either test, no amount of cleverness in the control law will compensate.

A quadrotor has 12 states (position, velocity, orientation, angular rate) but only 4 actuators (rotor speeds). It is controllable — but only because the coupling through orientation makes all 12 states reachable. Lose that coupling (e.g., a planar model with no tilt) and you lose controllability of lateral position.
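For small systems the structural check is a one-liner: the pair (A, B) is controllable iff the controllability matrix [B AB] has full rank. The sketch below does this for a two-state double integrator (an illustrative choice), contrasting a force input with a hypothetical input that enters the position equation only.

```python
# Controllability check for a 2-state system: rank of [B  AB] via its
# determinant. Matrices are illustrative (a double integrator).
def controllable_2x2(A, B):
    # Controllability matrix [B, A@B]; full rank iff determinant != 0.
    AB = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * AB[1] - B[1] * AB[0]
    return abs(det) > 1e-12

A = [[0.0, 1.0], [0.0, 0.0]]            # pos' = vel, vel' = u
print(controllable_2x2(A, [0.0, 1.0]))  # force input: True
print(controllable_2x2(A, [1.0, 0.0]))  # input on position only: False
```

In the second case the input never reaches the velocity state through any chain of dynamics, which is precisely the lost-coupling failure described above.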

Constraints Are Not Edge Cases

Real actuators saturate. Valves have limits, motors have torque bounds, control surfaces have deflection stops. A controller designed without accounting for constraints will command physically impossible inputs, causing integrator windup, loss of phase margin, or violent transients when the constraint releases. MPC handles constraints natively; for PID, anti-windup schemes (clamping, back-calculation) are essential.

An altitude-hold PID on a drone commands full throttle during a large climb. The throttle saturates, the error stays large, and the integral term accumulates an enormous positive value. When the drone reaches the setpoint, the wound-up integrator causes a massive overshoot before it unwinds. Anti-windup clamping prevents this.
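The windup scenario above can be reproduced with a few lines. This sketch runs a PI altitude loop against a saturated thrust actuator, with and without conditional-integration clamping; the plant model, gains, and limits are all illustrative.

```python
# PI altitude loop with actuator saturation, with and without anti-windup
# clamping. Plant, gains, and limits are illustrative.
def climb(antiwindup):
    alt, vel, z = 0.0, 0.0, 0.0          # altitude, climb rate, integral
    dt, peak = 0.01, 0.0
    for _ in range(6000):                # 60 s climb to 10 m
        e = 10.0 - alt
        u = 2.0 * e + 0.5 * z            # PI thrust command
        u_sat = max(-3.0, min(3.0, u))   # actuator limits
        if not antiwindup or u == u_sat: # clamp: freeze I when saturated
            z += dt * e
        vel += dt * (u_sat - 0.5 * vel)  # thrust minus drag
        alt += dt * vel
        peak = max(peak, alt)
    return peak                          # highest altitude reached

print(climb(antiwindup=False) > climb(antiwindup=True))
```

Without clamping the integrator keeps charging for the whole saturated climb and must discharge through overshoot; with clamping it only integrates once the actuator has authority again.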

Simplicity as a Design Goal

The best controller is the simplest one that meets the specification. PID before LQR. LQR before MPC. Gain scheduling before adaptive control. Complex controllers are harder to tune, harder to verify, harder to debug when something goes wrong at 2 AM, and more sensitive to implementation details (discretization, numerical precision, timing jitter). Every layer of complexity must earn its place by solving a problem that simpler approaches cannot.

SpaceX's early Falcon 9 landing attempts reportedly relied on classical PID-style loops for the final descent, beneath the trajectory guidance layer. The control that lands orbital rockets on drone ships is not exotic — it is carefully engineered simplicity with gain scheduling for the changing dynamics.
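Gain scheduling itself is usually nothing more than table interpolation. The sketch below is a hypothetical example: a proportional gain scheduled against dynamic pressure, with breakpoints chosen purely for illustration.

```python
# Gain scheduling sketch: interpolate a controller gain against a
# scheduling variable (here, dynamic pressure q_bar). The table of
# (breakpoint, gain) pairs is hypothetical.
def scheduled_gain(q_bar, table):
    """Linear interpolation over a sorted (breakpoint, gain) table."""
    if q_bar <= table[0][0]:
        return table[0][1]               # clamp below the first breakpoint
    if q_bar >= table[-1][0]:
        return table[-1][1]              # clamp above the last breakpoint
    for (x0, g0), (x1, g1) in zip(table, table[1:]):
        if x0 <= q_bar <= x1:
            t = (q_bar - x0) / (x1 - x0)
            return g0 + t * (g1 - g0)

kp_table = [(0.0, 8.0), (500.0, 3.0), (2000.0, 1.0)]  # soften as q_bar grows
print(scheduled_gain(250.0, kp_table))   # halfway between 8 and 3
```

Each breakpoint is a gain tuned for one linearized operating condition; the interpolation stitches those simple designs into one controller that tracks the changing dynamics.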