Thesis Defense of Oussama Bey

Oussama Bey, a PhD candidate in the SIRIUS team, will defend his thesis on January 27, 2026, in the RT Amphitheater at the UPEC campus in Vitry-sur-Seine, located at 120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Robust and Adaptive Assistive Control Approaches for an Active Ankle–Foot Orthosis

Thesis Director: Yacine Amirat

General overview:

"Lower-limb exoskeletons represent a major breakthrough in assistive robotics, aiming to restore or enhance human motor capabilities by providing assistive torques at the hip, knee, and ankle joints. Among these exoskeletons, the Actuated Ankle–Foot Orthosis (AAFO) plays a key role in supporting dorsiflexion and plantarflexion movements of the foot, helping correct the foot-drop phenomenon and ensuring proper foot clearance. The success of such devices depends on advanced control approaches that ensure accurate trajectory tracking, robustness to parametric uncertainties and external disturbances, and rapid detection of the wearer’s locomotion mode (walking context) to provide timely context-aware assistance across scenarios such as level walking, stair ascent/descent, and ramp ascent/descent. This thesis develops advanced control approaches for an AAFO under the assist-as-needed paradigm, ensuring that only the necessary level of assistance is provided to the user."

Research Talk Announcement

Title: Empowering Research with Open Research Knowledge Graph (ORKG)
Date & Time: December 18, 2024, at 5:00 PM

Speaker: Dr. Sanju Tiwari

Short Profile:
Dr. Sanju Tiwari, CEO and Founder of ShodhGuru Research Labs (India), is a Professor at Sharda University, India, and a Senior Researcher at TIB Hannover, Germany. She was awarded the prestigious DAAD Post-Doc-Net AI Fellowship in Germany for 2021, during which she visited various German universities. Prior to this, she worked as a postdoctoral researcher at the Ontology Engineering Group, Universidad Politécnica de Madrid, Spain, in 2019.

Dr. Tiwari is the User Board Chair of ORKG at TIB Hannover and co-authored the first book on ORKG alongside Prof. Sören Auer and his team. She has served as a leading organizer for workshops and conferences at renowned international events, such as ESWC, SEMANTiCS, KGSWC, and WWW.

A Google Summer of Code (GSoC) mentor at DBpedia (2022–2024), Dr. Tiwari is also a member of InfAI, Leipzig University, Germany. She initiated the "DBpedia Chapter in Hindi" project and has visited seven countries to conduct various research activities.

HDR Defense of Ghazaleh Khodabandelou

Ghazaleh Khodabandelou, Associate Professor at University Paris-Est Créteil, will defend her Habilitation to Supervise Research (HDR) on the theme "Computational Intelligence and Context Recognition in Complex Systems." The defense will take place on June 14 at 2:30 PM in the amphitheater of the Networks & Telecommunications department in Vitry-sur-Seine.

The defense will take place before the following jury:

Reviewers:

  • Prof. Ricardo Baeza-Yates, Northeastern University, Silicon Valley, USA
  • Prof. Adriana Tapus, ENSTA, Institut Polytechnique Paris
  • Prof. Younès Bennani, University of Sorbonne Paris Nord

Jury Members:

  • Prof. Jérôme Boudy, Télécom SudParis
  • Prof. Dominique Vaufreydaz, University Grenoble Alpes
  • Prof. Faïcel Chamroukhi, University of Caen, IRT SystemX
  • Prof. Latifa Oukhellou, University Gustave Eiffel
  • Prof. Yacine Amirat, University Paris-Est Créteil

Thesis Defense of Sylvain Guinebert

Sylvain Guinebert, a PhD candidate from the SIMO team, will defend his thesis on March 30, 2022, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Research and Development: AI-Assisted Interpretation of Spinal Pathologies

Thesis Supervisor(s): Yacine Amirat

Abstract:

Objective: To develop a tool for the automatic segmentation and identification of lumbar discopathies and fractures in MRI images using convolutional neural networks (CNNs).

Materials and Methods: We developed a PACS prototype with a DICOM viewer aimed at extracting training data, along with two CNNs, one dedicated to segmentation and the other to the analysis of simple lumbar pathologies. A total of 204 MRIs were selected from the Pasteur 2 University Hospital in Nice. After segmenting and classifying all structures of interest, we trained two neural networks (U-Net++ and YOLOv5x) to segment and detect discs and vertebrae.

Results: The neural networks provided semantic segmentations with high precision, achieving Dice scores of 0.96 and 0.93 for intervertebral discs and vertebral bodies, respectively. However, they were less effective in detecting common pathologies (degenerative disc diseases, disc herniations, vertebral fractures), with an area under the ROC (sensitivity-specificity) curve of 0.85 for fractures and 0.76 for degenerative discopathies.
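As an illustrative aside (not part of the thesis code), the Dice score reported above measures the overlap between a predicted segmentation mask and the ground truth; a minimal sketch on toy binary masks:

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks (flat lists of 0/1).

    Dice = 2 * |pred AND truth| / (|pred| + |truth|); 1.0 = perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks": predicted vs. ground-truth disc pixels (invented values)
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(round(dice_score(pred, truth), 2))  # 0.75
```

In practice the masks are 2-D or 3-D arrays, but the formula is the same after flattening.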

Conclusion: Our work demonstrated good efficiency in segmenting vertebral bodies and intervertebral discs, though improvements are still needed for detecting discal pathologies and vertebral fractures. We believe that the lower performance in these areas can be attributed to a lack of training images.

Keywords: artificial intelligence, AI, deep learning, magnetic resonance imaging, MRI, spinal pathologies.

Thesis Defense of Abhishek Djeachandrane

Abhishek Djeachandrane, a PhD candidate from the CIR team, will defend his thesis on December 19, 2023, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Implementation of an Intelligent Decision-Support System Self-Regulated by Application-Centric Quality of Experience

Thesis Supervisor(s): Abdelhamid Mellouk

Abstract:

In smart cities, video surveillance is an essential tool for ensuring public safety. In the past, security was enhanced by installing more cameras and centralizing their control. However, as the number of cameras increased, it became impossible for humans to manually monitor all footage in real time. Humans are prone to distraction and cannot maintain focus for extended periods. To address this challenge, experts have developed models capable of automatically detecting abnormal situations by analyzing video data and classifying it as normal or abnormal. These computer vision techniques have significantly advanced anomaly detection. However, due to the constantly changing and evolving nature of environments, conventional methods may not be sufficient to meet all the requirements of a real-world scenario. A literature review of "end-to-end urban video surveillance systems for asymmetric threats" was conducted to explore the subject in depth.

To address this issue, a development platform was meticulously designed, incorporating three fundamental strategies. First, it uses a corrective signal that considers exogenous, endogenous, and human factors in the surrounding context, known as "task-specific quality of experience." Second, it promotes predictive systems based on machine learning and situational awareness to enhance system capabilities and performance outcomes. Modular approaches to customized learning schemes were explored, converging on a solution called "similarity-based reinforcement meta-learning" for multi-instance anomaly detection. Finally, the study recommends the adoption of self-managing systems that rely on autonomic computing principles for configuration, protection, and learning, based on machine learning, descriptive and inferential statistics, and control theory.

Together, these strategies provide a comprehensive and robust framework to address crucial questions using cutting-edge technologies and methodologies. By combining data enrichment, situational awareness, and autonomic computing, the final system effectively meets the needs of modern enterprises, for which the ability to learn, infer, and adapt quickly is as vital as the ability to be aware of the surrounding context.

Thesis Defense of Mouhamad Almakhour

Mouhamad Almakhour, a PhD candidate from the CIR team, defended his thesis on October 4, 2023, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Reliable Collaboration Platform Based on Distributed Ledgers

Thesis Supervisor(s): Abdelhamid Mellouk

Abstract:

The emergence of new technologies such as 5G and IoT has led to the creation of new business models that rely primarily on what is known as open or dynamic collaboration as a key concept. In this context, a business process is defined as a group of activities carried out by different actors to achieve business objectives. Today, global market competition and changing conditions make collaboration in business processes between organizations increasingly necessary. This collaboration often involves unknown partners who need to exchange a large amount of data, which presents serious challenges, such as security breaches and unauthorized access.

As a result, companies have started seeking new decentralized, secure, and reliable environments that can implement, verify, and enforce agreements related to collaborative business processes in a transparent manner. Since 2015, a technology called blockchain has transformed the approach to collaboration. Blockchain has successfully addressed many traditional issues by providing a trustworthy, immutable, secure, and decentralized environment. Additionally, it offers self-executing codes known as "smart contracts," where the terms of an agreement between contractual partners are directly encoded.

Consequently, many businesses have begun integrating blockchain into their business processes, leading to the emergence of a new concept called "Collaborative Business Processes Based on Composite Smart Contracts." Composite smart contracts represent one or more smart contracts that must call other contracts belonging to the same or different business process owners to carry out specific tasks. However, new security risks have emerged with applications based on smart contracts. In blockchain, security breaches and vulnerabilities in any contract can result in significant financial loss. Thus, the formal verification of composite smart contracts is essential to ensure the security of collaboration.

To this end, we propose a new approach for verifying the security of Ethereum composite smart contracts in collaborative business process applications. In this work, we introduce a new framework based on formal verification techniques to verify both static and dynamic composite smart contracts, considering general security properties, context-dependent properties, and those relying on external contract calls. Our proposal utilizes finite-state machine models, temporal logic formulas, and model checking.
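The verification idea can be illustrated with a deliberately tiny sketch (the states, call names, and the "Stuck" error state below are hypothetical, not taken from the thesis): model the contract's phases as a finite state machine, enumerate every reachable state, and check that no reachable state violates a safety property. That exhaustive exploration is the essence of explicit-state model checking.

```python
from collections import deque

# Hypothetical FSM of a booking-style composite contract: states are
# contract phases, labeled edges are calls between contracts.
transitions = {
    "Init":      {"book": "Booked"},
    "Booked":    {"pay": "Paid", "cancel": "Refunding"},
    "Paid":      {"confirm": "Done", "cancel": "Refunding"},
    "Refunding": {"refund": "Closed"},
    "Done":      {},
    "Closed":    {},
}

def check_safety(start, bad_states):
    """Exhaustively explore reachable states via BFS (a micro model check)
    and return the reachable states that violate the safety property."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in transitions[state].values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen & bad_states)

# Safety property: the hypothetical "Stuck" error state is unreachable.
print(check_safety("Init", {"Stuck"}))  # [] -> property holds
```

Real model checkers additionally handle temporal-logic formulas over execution paths, not just reachability, but the state-space exploration step looks like this.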

To demonstrate the proposed approach, a proof of concept (PoC) with various use cases of composite smart contracts was implemented to test our proposal. First, we began with a standard motivating example, a "Travel Agency System" based on static composite smart contracts. Then, we provided a secure end-to-end platform based on dynamic composite smart contracts for Network Function Virtualization (NFV) to orchestrate and manage the lifecycle of virtual functions. The results obtained validated the effectiveness of our approach, avoiding many issues related to security, privacy, integrity, access control, non-repudiation, and more.

Thesis Defense of Koussaila Moulouel

Koussaila Moulouel, a PhD candidate from the SIRIUS team, defended his thesis on May 19, 2023, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Hybrid AI Approaches for Context Recognition: Application to Activity Recognition and Anticipation, and Management of Context Anomalies in Ambient Intelligence Environments

Thesis Supervisor(s): Yacine Amirat

Abstract:

Ambient Intelligence (AmI) systems aim to provide users with assistive services that improve their quality of life in terms of autonomy, safety, and well-being. Designing AmI systems capable of accurate, fine, and consistent recognition of the spatial and/or temporal context of users—considering the uncertainty and partial observability of AmI environments—presents several challenges for better adapting assistance services to the user’s context. This thesis proposes a set of contributions addressing these challenges.

First, a descriptive and narrative ontology of context is proposed to model contextual knowledge in AmI environments. The purpose of this ontology is to model the user's context, considering various context attributes, and to define the commonsense reasoning axioms necessary to deduce and update the user's context. Unlike state-of-the-art ontologies, the proposed context ontology includes (i) a TBox representing the core domain ontology defined by concepts and relations, (ii) an ABox of propositional formulas corresponding to instances of context attributes, and (iii) an RBox, represented by an ASP logic program, consisting of rule models such as event effect specification, triggered event specification, context component aggregation, and detection and assistance action planning. The TBox, ABox, and RBox form the foundation of the frameworks developed in this thesis, playing a crucial role in enhancing user context recognition.

The second contribution is a hybrid ontology-based framework that combines commonsense probabilistic reasoning and probabilistic planning to recognize user context, particularly context anomalies, and provide context-aware assistive services in the presence of uncertainty and partial observability in environments. This framework leverages predictions of context attributes, such as user activity and location, provided by deep learning models. In this framework, the commonsense probabilistic reasoning is based on the proposed context ontology to define the axiomatization of context inference and planning under uncertainty. Probabilistic planning is used to characterize abnormal context by addressing the incompleteness of contextual knowledge due to the partial observability of AmI environments. Moreover, probabilistic planning allows for adapting the assistive services provided to the user based on their context. The proposed framework was evaluated using transformers and CNN-LSTM models on the Orange4Home and SIMADL datasets. The results demonstrate the framework’s effectiveness in recognizing user contexts, such as user activity and location, as well as context anomalies in uncertain and partially observable environments.

Third, a hybrid framework combining deep learning and probabilistic reasoning for anticipating human activities based on egocentric videos is proposed. The probabilistic commonsense reasoning used in this framework is based on abductive reasoning to anticipate atomic and composite human activities, and on temporal reasoning to capture changes in context attributes. Deep learning models, namely YOLOv5 and ResNet, were used to recognize context attributes such as objects, human hands, and people’s locations. The context ontology is used to model relationships between atomic and composite activities. The evaluation of the framework shows its ability to anticipate composite activities over a time horizon of a few minutes, unlike state-of-the-art approaches that can only anticipate atomic activities over a time horizon of a few seconds. It also demonstrated strong performance in terms of accuracy in classifying anticipated activities and computational time.

Finally, a stream-based reasoning framework is proposed for anticipating atomic and composite human activities based on streams of contextual attribute data collected on-the-fly. Deep learning models YOLOv7 and ResNet were employed to recognize contextual attributes such as objects used in activities, hands, and user locations. The stream-based reasoning system performs causal, abductive, and temporal reasoning using contextual knowledge obtained in real time. Dynamic effect axioms were introduced to anticipate composite activities that may be subject to unforeseen events, such as the skipping or delaying of an atomic activity. The proposed framework was validated through experiments conducted in a kitchen environment. The high performance in terms of the number of activity anticipations demonstrates the framework’s ability to leverage past contextual knowledge needed to anticipate composite activities. Its performance in terms of contextual knowledge inference time indicates that the framework is well-suited for real-world applications.

Thesis Defense of Rola El Saleh

Rola El Saleh, a PhD candidate from the SYNAPSE team, defended her thesis on December 16, 2021, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Biometrics for Face Skin Analysis Using Machine Learning-Based Approaches

Abstract:

The emergence of artificial intelligence (AI), access to large databases, and the availability of supercomputers have undeniably revolutionized various fields. In particular, the development of machine learning (ML) algorithms, especially deep learning (DL), has greatly benefited the biomedical domain. In the context of dermatology, numerous studies have been conducted to automatically analyze skin images to predict diseases and monitor their progression over time.

This thesis proposes a computer-aided diagnostic system based on DL approaches that analyzes facial images and identifies potential facial diseases using only facial phenotypes, without region of interest extraction. This medical facial biometrics relies on the use of pre-trained convolutional neural networks (CNNs) such as VGG-16, EfficientNet B0, and Inception V3, which are fine-tuned to create new models tailored for classifying facial skin images into eight distinct pathologies: acne, actinic keratosis, angioedema, blepharitis, eczema, melasma, rosacea, and vitiligo.

To achieve this, a transfer learning method is utilized. Specifically, the original architectures of the three models are modified by adding new layers at the top. The proposed algorithms are trained and validated on a database specifically created for this purpose. The models are tested and evaluated under varying acquisition conditions (facial pose, lighting, image resolution, etc.). The results obtained are very promising, demonstrating the effectiveness of the proposed approach in accurately diagnosing facial skin diseases.

Thesis Defense of Randa Mallat

Randa Mallat, a PhD candidate from the SIRIUS team, will defend her thesis on January 28, 2021, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: Toward an affordable multi-modal motion capture system framework for human kinematics and kinetics assessment

Abstract:

The quantification of human motor activities requires the measurement and estimation of kinematic and dynamic variables as accurately as possible. Human motion analysis has a wide range of applications in the fields of functional rehabilitation, orthopedics, sports, assistive robotics, and industrial ergonomics. Current motion analysis systems generally refer to stereophotogrammetric systems and laboratory force platforms, which are accurate but also expensive, require expert skills, and are not portable. Recently, the use of low-cost sensors for estimating human motion, such as inertial measurement units and RGB cameras, has been the subject of numerous studies. Despite their great potential for use outside the laboratory, these systems still suffer from limited accuracy, primarily due to the inherent drift of inertial sensors and occlusions when using cameras, making precise estimation of joint kinematics and dynamics difficult to guarantee. These restrictions may explain why such systems are rarely used in clinical applications or for home rehabilitation.

In this context, this thesis aims to develop a new low-cost motion analysis system enabling precise estimation of the 3D state of human joints. Unlike previous studies based solely on visual or inertial sensors, the proposed approach focuses on the combination of data from newly designed visual-inertial sensors. The system also utilizes practical calibration methods that require no external equipment. The sensor data is combined in a constrained extended Kalman filter that considers the biomechanics of the human body and the tasks performed to improve kinematic estimation. This is achieved by incorporating rigid-body constraints, joint limits, and modeling the temporal evolution of joint trajectories or inertial sensor drift. The system's ability to estimate 3D joint kinematics was validated through the analysis of several daily arm activities as well as treadmill gait analysis.
Two prototypes with different numbers and configurations of sensors were studied. Experiments conducted with several healthy subjects showed very satisfactory results compared to a reference stereophotogrammetric system. Overall, the root mean square error obtained was less than 4 degrees. This system was also used to identify the dynamic parameters of the lower limbs of a human-exoskeleton system. An evaluation system was proposed to select an optimal dynamic model of the human-exoskeleton system that provides the best compromise between the accuracy of the estimated joint torques and the model's simplicity. In this context, the proposed system aims to quantify the independent contribution of kinematic and dynamic parameters in estimating joint torque, as well as the effect of relative motion between the joint axes of the exoskeleton and the wearer. An evaluation was conducted on a knee assistive orthosis during flexion/extension movements. The results led to the proposal of a minimal model of the human-orthosis system.
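For context, the sub-4-degree figure above is a root-mean-square error between the joint angles estimated by the low-cost system and those of the reference stereophotogrammetric system; a minimal sketch on toy angle series (the values below are invented for illustration):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two equal-length angle series (degrees)."""
    assert len(estimated) == len(reference) and reference
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimated, reference)) / len(reference)
    )

# Toy knee-flexion angles: visual-inertial estimate vs. reference system
est = [10.0, 25.0, 41.0, 33.0]
ref = [12.0, 24.0, 38.0, 35.0]
print(round(rmse(est, ref), 2))  # 2.12, i.e. under the 4-degree mark
```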

Thesis Defense of Yu Su

Yu Su, a PhD candidate from the SYNAPSE team, will defend her thesis on November 11, 2019, in the RT Amphitheater at the Vitry-sur-Seine campus of UPEC—120 rue Paul Armangot, 94400 Vitry-sur-Seine.

Title: A bio-inspired smart perception system based on human’s cognitive auditory skills

Abstract:

Developing a machine capable of conscious perception of its environment, alongside humans, is one of the goals of bio-inspired artificial intelligence (BIA). The AI and BIA research communities generally agree that adding an artificial capability that produces a kind of "awareness" or "conscious" processing of information by a machine would lead to technology that is much more powerful and advanced than that based on conventional AI.

Hearing is one of the main sensory systems of the human cognitive system. The ears transform the myriad of stimuli perceived from the ambient environment into signals (nerve impulses) generated by different types of nerve cells, and this occurs at all times, even while we sleep. In fact, alongside vision, the auditory system constitutes a fundamental sense of perception in humans. Motivated by the importance of auditory complementarity in human perception and its characterization of the surrounding environment, and given the current limitations in simulating the human cognitive auditory mechanism, the primary objective of this doctoral work is to provide machines with artificial cognitive auditory capabilities that give them an augmented and adapted perception of the environment, similar to that developed in humans.

To achieve this goal, a study of the latest research covering auditory attention models, environmental sound classification techniques, deep learning-based methods, and human auditory response mechanisms was conducted to better understand the state of the art and the complexity of achieving the objectives of this doctoral work. This study highlighted the inherent shortcomings of existing techniques and directed our investigations toward modeling bio-inspired mechanisms for detecting auditory deviance. These models were associated with convolutional neural networks (CNNs) to categorize detected sounds in the environment by exploiting a knowledge-based system.

Subsequently, the work led to the implementation of a model for detecting auditory deviance using both temporal and spatial characteristics of the perceived sound (temporal and spatial domains). An approach for extracting such characteristics was proposed. Thus, these characteristics contribute to detecting deviance and auditory salience in each domain (i.e., temporal domain and spatial domain) to be combined later to enhance the detection and categorization of the perceived sound from the real environment (i.e., the final output). Experimental results demonstrate the viability of the proposed model for detecting salient deviant sounds in an audio clip, as well as the robustness and accuracy of the proposed models.

Finally, the work resulted in the development of a powerful model for detecting and characterizing environmental sounds, derived from the fusion of two 4-layer CNNs. The two types of aggregated acoustic features proposed and evaluated in Chapter 4 were used to train each CNN separately. The fusion occurs at the "softmax" values of the two CNN models. Experimental results revealed exceptional performance in detecting and classifying sound events: 97.2% obtained on the UrbanSound8K dataset, which is 4.2% higher than the most effective methods in the field.
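As a small illustration of this late-fusion scheme (a sketch under simplifying assumptions, not the thesis implementation, with invented logits), fusing two classifiers at the softmax level amounts to averaging their class-probability distributions:

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution (max-shifted for stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(logits_a, logits_b):
    """Late fusion: average the softmax outputs of the two CNN branches."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return [(a + b) / 2 for a, b in zip(pa, pb)]

# Toy 3-class sound-event example: each branch scores the clip from its own
# aggregated acoustic features; the fused distribution decides the class.
branch_a = [2.0, 1.0, 0.1]   # hypothetical logits from the first CNN
branch_b = [1.5, 1.4, 0.2]   # hypothetical logits from the second CNN
fused = fuse(branch_a, branch_b)
print(fused.index(max(fused)))  # 0 -> fused prediction is class 0
```

Averaging probabilities rather than raw logits keeps each branch's vote bounded, so one over-confident branch cannot dominate the decision.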