Case Study
AI-Driven Design of Flexible Micro-Grids
Cesare Caputo, Michel-Alexandre Cardin, Pudong Ge, Fei Teng, Anna Korre, Ehecatl Antonio del Rio Chanona
This study presents a Deep Reinforcement Learning (DRL) approach to designing flexible mobile micro-grids. The framework integrates energy system modelling, simulation, and real options principles to optimise planning under uncertain demand, cost, and resource availability. Compared with static designs, it demonstrates improved adaptability, reduced risk, and higher expected performance.

Motivation
The rapid deployment of renewable and distributed energy systems calls for flexible micro-grid architectures capable of adapting to unpredictable environments, especially in temporary or mobile applications such as disaster relief, military operations, or remote communities. Traditional optimisation methods struggle with the high dimensionality and stochastic nature of these systems. This motivated the authors to explore AI-driven decision-making that learns adaptive design and operation policies, maximising performance while maintaining robustness under uncertainty.
Methodologies
- Deep Reinforcement Learning (DRL): Used to learn adaptive decision strategies for grid configuration and operation under uncertain demand and renewable output.
- Simulation-Based Design: Modelled multiple mobile micro-grid scenarios with variable loads and distributed energy resources (DERs).
- Real Options Analysis (ROA): Quantified the economic value of flexible reconfiguration and expansion decisions over time.
- Monte Carlo Simulation: Assessed stochastic variations in resource availability, cost, and demand.
- Sensitivity Analysis: Explored the influence of training horizon, reward shaping, and network parameters on learning stability and convergence.
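The DRL component itself is beyond a short snippet, but the learning loop it builds on can be illustrated with a tabular Q-learning agent on a toy battery-dispatch problem. Everything below (the state space, the reward numbers, the demand model) is an invented stand-in for illustration, not the authors' environment or algorithm:

```python
import random

# Tabular Q-learning stand-in for a DRL agent on an invented toy problem:
# each hour a battery chooses to discharge (-1), idle (0), or charge (+1)
# given its state of charge, under stochastic demand arrivals.
rng = random.Random(0)
N_SOC = 5                            # discretised battery states of charge
ACTIONS = (-1, 0, 1)                 # discharge, idle, charge
Q = {(s, a): 0.0 for s in range(N_SOC) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.8, 0.1    # learning rate, discount, exploration

def step(soc, action):
    """Toy environment: reward serving demand, penalise unserved load.
    All reward magnitudes are arbitrary illustrative choices."""
    demand = rng.random() < 0.5          # does demand arrive this hour?
    reward, served = 0.0, False
    if action == 1 and soc < N_SOC - 1:  # charge: pay for energy
        soc += 1
        reward -= 0.2
    elif action == -1 and soc > 0:       # discharge: serve load if present
        soc -= 1
        served = True
        reward += 1.0 if demand else 0.1
    if demand and not served:
        reward -= 0.5                    # penalty for unserved demand
    return soc, reward

soc = 2
for _ in range(20000):                   # epsilon-greedy Q-learning updates
    a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: Q[(soc, x)])
    nxt, r = step(soc, a)
    Q[(soc, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(soc, a)])
    soc = nxt

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_SOC)}
print(policy)   # learned action per state of charge
```

The study's DRL replaces the lookup table with a neural network and this toy battery with a full micro-grid simulation, but the update rule and exploration/exploitation trade-off are the same in spirit.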
Insights
- Improved Adaptability: DRL-enabled planning allowed real-time adaptation to changes in demand and renewable generation.
- Economic Benefit: Flexible micro-grids achieved up to 20–35% higher expected net present value (NPV) than fixed planning approaches.
- Resilience: The system maintained operational stability even under extreme uncertainty and partial information scenarios.
- Design Integration: Combining DRL with real options created a hybrid AI-economic framework for resilient, flexible infrastructure planning.
- Scalability: The method generalises to multi-agent settings, enabling coordination among multiple mobile energy units.
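The real-options logic behind the economic benefit can be sketched in a few lines: the value of flexibility is the difference in Monte Carlo expected NPV between a design that can expand when demand turns out high and one that commits all capacity up front. All costs, tariffs, thresholds, and demand parameters below are invented for the sketch and do not reproduce the study's 20–35% figure:

```python
import random
import statistics

def expected_npv(flexible, n_runs=3000, years=10, rate=0.08, seed=7):
    """Monte Carlo estimate of expected NPV under stochastic demand.
    Fixed design: 100 kW installed up front. Flexible design: 60 kW now,
    plus an option to add 60 kW in year 3 if observed demand is high.
    All numbers are illustrative assumptions, not values from the study."""
    rng = random.Random(seed)
    capex_per_kw, tariff, cap_factor = 1200.0, 0.25, 0.2
    npvs = []
    for _ in range(n_runs):
        cap_kw = 60.0 if flexible else 100.0
        npv = -capex_per_kw * cap_kw
        demand_kw = 50.0
        for t in range(1, years + 1):
            demand_kw *= rng.lognormvariate(0.06, 0.25)  # uncertain demand path
            if flexible and t == 3 and demand_kw > 70.0:  # exercise expansion option
                cap_kw += 60.0
                npv -= capex_per_kw * 60.0 / (1 + rate) ** t
            revenue = min(demand_kw, cap_kw) * 8760 * cap_factor * tariff
            npv += revenue / (1 + rate) ** t
        npvs.append(npv)
    return statistics.mean(npvs)

option_value = expected_npv(flexible=True) - expected_npv(flexible=False)
print(f"Estimated value of the expansion option: ${option_value:,.0f}")
```

In the study, the fixed decision rule (expand in year 3 if demand exceeds a threshold) is replaced by the learned DRL policy, which is what lets the framework value far richer reconfiguration strategies than classical ROA alone.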
Training
Relevant lectures and skills:
- Deep Reinforcement Learning
- Real Options Analysis
- Monte Carlo Simulation
- Agent-Based Decision Modelling
- Sensitivity Analysis
