
Congratulations to Osama Fuad Abdel Aal for passing his Ph.D. proposal defence!

November 21, 2025


On Thursday, November 20, 2025, from 2:00 to 4:30 pm, Mr. Osama Fuad Abdel Aal successfully passed his Ph.D. proposal defence! The committee's decision was unanimous, with special praise for his originality, mathematical rigor, clear visual and logical presentation, and time management. Congratulations, Osama! All MESA Lab Ph.D. graduate members in town attended the public portion of his defence.

Web: https://mechatronics.ucmerced.edu/ct4ml

Title: The Interplay Between Control Theory and Optimization: Control Theory For Machine Learning

Abstract: The increasing integration of data-driven intelligence and automation has transformed the modern landscape of control and optimization, demanding frameworks that are simultaneously adaptive, interpretable, and robust. This thesis explores the interplay between control theory and optimization algorithms—two fields that, despite being historically distinct, share deep conceptual and methodological connections. By weaving together recent advances in machine learning, digital twins, and robust control analysis, the proposed research aims to establish principled approaches to both (i) learning for control, where optimization-based learning enhances the design and performance of controllers, and (ii) control for learning, where control-theoretic tools provide rigorous foundations for analyzing and improving optimization algorithms that underpin modern machine learning and artificial intelligence (AI).

The first dimension, Learning for Control, addresses the challenges of designing intelligent controllers in complex, uncertain, and dynamic environments. A central theme is the data-driven discovery of control laws, where system dynamics and feedback mechanisms are inferred directly from data rather than exclusively from first-principles models. Building on this foundation, the thesis investigates self-optimizing control (SOC) as a paradigm for smart control engineering. SOC leverages high-performance real-time optimization algorithms—both gradient-based and derivative-free—to adjust low-level control parameters or setpoints dynamically, subject to periodic tasks and performance specifications. Complementary strategies such as run-to-run control and iterative learning control (ILC) will be explored as repetitive frameworks that naturally align with SOC objectives. Crucially, the role of digital twins is emphasized: by enabling real-time analytics, exhaustive scenario testing, and continuous system updates, digital twins provide the informational backbone for SOC. This creates a hierarchical structure where SOC serves as the upper layer of intelligence, translating insights from digital twins and industrial AI analytics into adaptive, robust, and "smart" control actions. The thesis contributes novel architectures and algorithms that close the loop between data, optimization, and control, ensuring responsiveness, reliability, and efficiency in next-generation engineering systems.
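As a minimal illustration of the iterative learning control idea mentioned above, the sketch below applies a classical P-type ILC update to a toy first-order plant. The plant parameters (a, b), learning gain (gamma), and reference signal are illustrative assumptions, not taken from the proposal; convergence here follows from the standard contraction condition |1 − γb| < 1.

```python
import numpy as np

# P-type iterative learning control (ILC) on a toy first-order plant
#   x[t+1] = a*x[t] + b*u[t],  y[t] = x[t],
# tracking the same periodic reference on every trial. Between trials,
# the input is updated from the previous trial's error:
#   u_{k+1}[t] = u_k[t] + gamma * e_k[t+1]
# (convergent here since |1 - gamma*b| < 1).
a, b, gamma = 0.3, 1.0, 0.5              # illustrative plant and gain
T = 50                                   # samples per trial
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T + 1))  # periodic reference

def run_trial(u):
    """Simulate one trial from rest; return the output trajectory."""
    x, y = 0.0, np.zeros(T + 1)
    for t in range(T):
        x = a * x + b * u[t]
        y[t + 1] = x
    return y

u = np.zeros(T)                          # first trial: no input
for k in range(20):                      # 20 learning trials
    e = ref - run_trial(u)               # tracking error of trial k
    u = u + gamma * e[1:]                # P-type ILC update

print(f"max tracking error after learning: {np.max(np.abs(ref - run_trial(u))):.2e}")
```

The same reference is repeated on every trial, so the error shrinks geometrically across trials; this repetitive structure is what aligns ILC and run-to-run control with SOC objectives.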

The second dimension, Control for Learning, reverses the perspective by leveraging control theory to advance the theory and practice of optimization—the computational engine at the heart of machine learning and AI. Optimization algorithms can be interpreted as dynamical systems, and their behavior can thus be analyzed and shaped using control-theoretic principles. Key tools include Lyapunov-based methods to establish and certify convergence guarantees, as well as techniques for accelerating convergence through structured feedback design. Building on these insights, the thesis investigates the design of optimization algorithms with finite-time or fixed-time convergence, inspired by non-asymptotic control strategies. Dissipativity theory offers another promising lens, enabling the characterization of energy-like quantities in optimization dynamics to inform both stability analysis and novel algorithmic architectures. Furthermore, robust control perspectives will be applied to examine convergence under uncertainty, noise, and model misspecification, thereby enhancing the resilience of learning algorithms. This dual focus—on analysis and design—positions control theory not merely as a tool for understanding existing algorithms but as a systematic foundation for the discovery of new classes of optimization methods with provable and tunable properties.
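As a toy instance of this dynamical-systems viewpoint (our own sketch, not an algorithm from the proposal): for gradient descent on a smooth convex quadratic with step size 1/L, the suboptimality V(x) = f(x) − f(x*) is a discrete-time Lyapunov function—nonnegative and decreasing along every trajectory—which certifies convergence.

```python
import numpy as np

# Gradient descent viewed as a discrete-time dynamical system
#   x_{k+1} = x_k - eta * grad f(x_k).
# For L-smooth convex f and eta = 1/L, the descent lemma gives
#   f(x_{k+1}) <= f(x_k) - ||grad f(x_k)||^2 / (2L),
# so V(x) = f(x) - f(x*) is a Lyapunov function for the iteration.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])

f = lambda x: 0.5 * x @ A @ x - b @ x    # strongly convex quadratic
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)           # unique minimizer
L = np.linalg.eigvalsh(A).max()          # smoothness constant

x = np.array([5.0, -5.0])
V = [f(x) - f(x_star)]                   # Lyapunov values along the run
for _ in range(30):
    x = x - grad(x) / L                  # step size eta = 1/L
    V.append(f(x) - f(x_star))

assert all(V[k + 1] <= V[k] for k in range(30))  # monotone decrease
print(f"V_0 = {V[0]:.3e}, V_30 = {V[-1]:.3e}")
```

The monotone decrease of V is exactly a Lyapunov certificate of convergence; shaping how fast V decays (e.g., via momentum terms viewed as feedback) is the design side of this perspective.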

Taken together, the proposed work highlights a bidirectional synthesis between control theory and optimization. On one side, learning and optimization empower controllers to adapt in real time, guided by data and enabled by digital infrastructure such as digital twins. On the other, control-theoretic methodologies ground optimization algorithms in a rigorous dynamical framework, enabling the design of faster, more robust, and theoretically principled learning methods. This thesis argues that bridging these two perspectives is essential to advancing both fields: it offers a pathway toward smart, self-optimizing control systems in engineering practice, while simultaneously contributing to control-inspired optimization at the core of AI. The anticipated outcomes include new theoretical frameworks, algorithmic innovations, and practical case studies, ultimately advancing the vision of systems that are not only controlled intelligently but that also learn and optimize intelligently within their environments.

An additional dimension explored in this thesis is the integration of fractional calculus into the analysis and design of optimization algorithms. Fractional-order operators, by extending differentiation and integration to non-integer orders, capture memory and hereditary properties that conventional integer-order dynamics cannot. This thesis proposes novel fractional gradient descent schemes, where fractional derivatives replace or augment classical gradients, potentially reshaping optimization trajectories. Such approaches are expected to enhance performance in several respects: improving exploration in nonconvex landscapes by smoothing oscillations, accelerating convergence near critical points, and offering greater robustness against noise and irregular objective geometries. By embedding fractional-order dynamics into optimization, the work aims to provide a principled mechanism for balancing exploration and exploitation, thereby extending the frontier of control-inspired optimization algorithmic design.
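To make the memory property concrete, the following sketch (our own illustration, not one of the schemes proposed in the thesis) computes the Grünwald–Letnikov coefficients that discretize a fractional derivative of order alpha. For alpha = 1 they collapse to the ordinary first difference (coefficients 1, −1, then zeros), whereas for 0 < alpha < 1 they decay slowly, so every past sample retains a nonzero weight—exactly the hereditary behavior described above. The sanity check uses the known closed form D^α t = t^(1−α)/Γ(2−α).

```python
# Gruenwald-Letnikov (GL) discretization of a fractional derivative of
# order alpha (lower terminal 0). The coefficients of (1 - q^{-1})^alpha
# follow the recurrence c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1)/j).
def gl_coeffs(alpha, n):
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(f, t, alpha, h):
    """GL approximation of the alpha-order derivative of f at t."""
    n = round(t / h)                     # number of past samples used
    c = gl_coeffs(alpha, n + 1)
    return sum(cj * f(t - j * h) for j, cj in enumerate(c)) / h**alpha

# alpha = 1 recovers the ordinary first difference (1, -1, then zeros):
print(gl_coeffs(1.0, 4))
# Half-order derivative of f(t) = t is t^(1/2)/Gamma(1.5) ~= 1.1284 at t = 1;
# the GL sum with step h = 1e-3 closely approximates it.
print(gl_derivative(lambda t: t, 1.0, 0.5, 1e-3))
```

A fractional gradient scheme in this spirit would replace the instantaneous gradient with such a power-law-weighted combination of past values; the schemes actually developed in the thesis may differ in form and analysis.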

Keywords: Learning for Control, Control Theory for Machine Learning, Optimization Algorithms, Fractional Calculus, Data-Driven Control, Self-Optimizing Control.

Thesis proposal (frontmatter) PDF file 


Created and last updated by Prof. YangQuan Chen. 11/21/2025.