Adaptive Control of an Actuated Ankle Foot Orthosis

 


In this demonstrator, a model reference adaptive controller with saturated proportional-derivative (PD) action for an active ankle foot orthosis (AAFO), designed to assist the gait of paretic patients, is studied. Unlike most classical model-based controllers, the proposed controller does not require any prior estimation of the system's model parameters. The AAFO system is actively driven by the residual human torque delivered by the muscles spanning the ankle joint and by the torque of the AAFO's actuator. The ankle reference trajectory is updated online based on the self-selected walking speed of the wearer. The input-to-state stability of the closed-loop AAFO-wearer system with respect to a bounded human muscular torque is proved through a Lyapunov analysis. Experiments with one healthy subject and one paretic patient showed satisfactory results in terms of tracking performance and ankle joint assistance throughout the whole gait cycle.
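As an illustration of this kind of control scheme, the sketch below implements a generic saturated-PD loop with a simple adaptive feedforward term. The gains, the gradient-style adaptation law, and the use of the 15 Nm actuator limit as the saturation level are illustrative assumptions, not the exact law proved in the paper.

```python
import numpy as np

def sat(x, limit):
    """Element-wise saturation that keeps the commanded torque bounded."""
    return np.clip(x, -limit, limit)

class AdaptiveSatPD:
    """Sketch of an adaptive controller with saturated PD action.
    Gains, adaptation rate, and update law are placeholders."""

    def __init__(self, kp=40.0, kd=2.0, gamma=0.5, torque_limit=15.0):
        self.kp, self.kd = kp, kd
        self.gamma = gamma            # adaptation rate (assumed value)
        self.limit = torque_limit     # actuator limit: 15 Nm
        self.theta = 0.0              # adaptive feedforward estimate

    def step(self, q, dq, q_ref, dq_ref, dt):
        e, de = q_ref - q, dq_ref - dq
        # Saturated PD action bounds the feedback part of the torque.
        u_pd = sat(self.kp * e + self.kd * de, self.limit)
        # Gradient-style update of the adaptive term (sketch only).
        self.theta += self.gamma * e * dt
        return sat(u_pd + self.theta, self.limit)
```

The final saturation guarantees that the commanded torque never exceeds the actuator's capability, which is what makes the boundedness argument in the stability analysis possible.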

References:
[1] V. Arnez-Paniagua, H. Rifai, Y. Amirat, M. Ghedira, J.-M. Gracies, and S. Mohammed, "Modified Adaptive Control of an Actuated Ankle Foot Orthosis to Assist Paretic Patients," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, accepted.

[2] V. Arnez-Paniagua, H. Rifai, S. Mohammed, Y. Amirat, "Adaptive Control of an Actuated Ankle Foot Orthosis for Foot-Drop Correction," in Proc. of the IFAC World Congress, Toulouse, France, 2017, pp. 1384-1389.

Assistance of Daily Living Activities using a Lower Limb Exoskeleton

 


Gait modes, such as level walking, stair ascent/descent, and ramp ascent/descent, show different lower-limb kinematic and kinetic characteristics. Therefore, accurate detection of these modes is critical for a wearable robot to provide appropriate power assistance. In this demonstrator, a fast gait mode detection method based on a body sensor system is shown. A fuzzy logic algorithm is used to estimate the likelihoods of the gait modes in real time. Since the proposed fast gait mode detection makes it possible to select appropriate kinematic and kinetic models for each gait mode, the assistive torques required for assisting human motion can be obtained more naturally and immediately. The proposed methods are all verified by experiments with a lower-limb exoskeletal assistive robot with transparent actuation by series elastic actuators, called the Exoskeletal Robotic Orthosis for Walking Assistance (EROWA). Four healthy subjects participated in the experiments. All subjects were asked to perform the different gait modes using their normal and simulated abnormal gaits, i.e., blocking the knee joint of one leg during walking. Latency and success rate of gait mode detection are selected as performance criteria. The effectiveness of the proposed gait-mode-based assistive strategy is evaluated using EMG muscular activities. The video below shows the demonstrator in detail.
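The fuzzy likelihood estimation can be sketched as follows. The two features (a limb inclination angle and a vertical acceleration) and the triangular membership parameters are hypothetical placeholders chosen for illustration, not the features or parameters used by EROWA.

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical membership parameters per gait mode for two features:
# (inclination in degrees, vertical acceleration in m/s^2).
MODES = {
    "level_walking": ((-10, 0, 10), (-2, 0, 2)),
    "stair_ascent":  ((10, 25, 40), (0, 2, 4)),
    "ramp_descent":  ((-40, -20, 0), (-4, -2, 0)),
}

def mode_likelihoods(incline, accel):
    """Min-rule fuzzy likelihood of each gait mode, normalized to sum to 1."""
    raw = {m: min(tri(incline, *p1), tri(accel, *p2))
           for m, (p1, p2) in MODES.items()}
    total = sum(raw.values())
    return {m: (v / total if total else 0.0) for m, v in raw.items()}
```

Because each sample is evaluated independently against the membership functions, the likelihoods are available immediately, which is what enables the low-latency mode switching described above.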

References:
[1] W. Huo, S. Mohammed, Y. Amirat, K. Kong, "Fast Gait Mode Detection and Assistive Torque Control of an Exoskeletal Robotic Orthosis for Walking Assistance (E-ROWA)," IEEE Transactions on Robotics, DOI: 10.1109/TRO.2018.2830367, pp. 1-18, 2018.

[2] W. Huo, S. Mohammed, Y. Amirat, K. Kong, "Active Impedance Control of a Lower Limb Exoskeleton to Assist Sit-to-Stand Movement," in Proc. of the IEEE International Conference on Robotics and Automation, ICRA 2016, Stockholm, Sweden, 2016, pp. 3530-3536.

Assistance-as-Needed based approach to assist knee joint flexion-extension movements

 


This demonstrator shows a new approach to control a wearable knee joint exoskeleton driven by the wearer's intention. A realistic bio-inspired musculoskeletal knee joint model is used to control the exoskeleton. This model considers changes in muscle length and joint moment arms as well as the dynamics of muscle activation and muscle contraction during lower limb movements. Identification of the model parameters is performed by formulating an unconstrained optimization problem. A control law strategy based on the principle of assistance as needed is proposed. This approach guarantees asymptotic stability of the knee joint orthosis and adaptation to the human-orthosis interaction. Moreover, the proposed control law is robust with respect to external disturbances. As shown in the video below, experimental validations are conducted online with healthy subjects during flexion and extension of their knee joint. The proposed control strategy has shown satisfactory performance in terms of trajectory tracking and adaptation to the human's task completion.
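A minimal sketch of the assistance-as-needed principle is shown below: the orthosis torque shrinks as the wearer's own contribution grows. The linear scaling law, the EMG-derived contribution ratio, and the torque limit are illustrative assumptions, not the control law of the referenced papers.

```python
def assist_as_needed(tau_desired, human_ratio, tau_max=12.0):
    """Scale the orthosis torque down as the wearer's own contribution
    (human_ratio in [0, 1], e.g. estimated from EMG-driven muscle
    models) grows. All values here are illustrative placeholders."""
    r = min(max(human_ratio, 0.0), 1.0)   # clamp the estimated ratio
    tau = (1.0 - r) * tau_desired          # assist only the deficit
    return max(-tau_max, min(tau_max, tau))
```

With this scheme the orthosis is fully transparent when the wearer completes the task alone (ratio 1) and provides the full reference torque when no voluntary effort is detected (ratio 0).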


References:
[1] W. Hassani, S. Mohammed, H. Rifai, Y. Amirat, "Powered orthosis for lower limb movements assistance and rehabilitation," Control Engineering Practice, Elsevier, vol. 26, no. 5, pp. 245-253, 2014.

[2] W. Hassani, S. Mohammed, Y. Amirat, "Real-Time EMG driven Lower Limb Actuated Orthosis for Assistance As Needed Movement Strategy," in the 2013 Robotics: Science and Systems Conference, RSS 2013, Berlin, Germany, Jun. 2013. Available: http://roboticsproceedings.org/rss09/p54.pdf

Prototype AAFO

 

AAFO (Actuated Ankle-Foot Orthosis)

The Active Ankle Foot Orthosis (AAFO) is attached with straps to the subject's left calf and thigh, as shown in the figure below. The orthosis has an active rotational degree of freedom (DoF) at the ankle joint (driven by a DC motor with a gearbox with a gear ratio of 114.4:1 and a maximum output torque of 15 Nm) and one passive rotational DoF at the knee joint. The total mass of the AAFO is 3.5 kg, but only 1.5 kg (the motor and gearbox) is attached to the shank, while the rest (the electronics and battery) is securely fastened to the waist. The AAFO is equipped with an incremental encoder to measure the ankle joint angle at a sampling rate of 1 kHz. Two inertial measurement units (IMUs) are used to estimate the angle between the shank and the vertical axis as well as the translational accelerations at the ankle level in the horizontal and vertical directions. Three force sensitive resistors (FSRs) are embedded in each of the left and right insoles (six FSRs in total) and are connected to a wireless system. All data are time normalized to 100% of the gait cycle.
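Time normalization of the recorded signals to 100% of the gait cycle can be done by simple resampling, for instance as in the sketch below. Linear interpolation onto 101 samples (0% to 100%) is a common convention; the exact resampling used for the AAFO data is not specified here.

```python
import numpy as np

def normalize_gait_cycle(signal, n_points=101):
    """Resample a signal recorded over one gait cycle onto
    0..100% of the cycle (101 samples) by linear interpolation."""
    signal = np.asarray(signal, dtype=float)
    src = np.linspace(0.0, 100.0, len(signal))  # original sample positions
    dst = np.linspace(0.0, 100.0, n_points)     # target percentages
    return np.interp(dst, src, signal)
```

This makes gait cycles of different durations directly comparable, e.g. for averaging the ankle angle across strides or across subjects.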

Actuated Ankle Foot Orthosis (AAFO)

References:
[1] V. Arnez-Paniagua, H. Rifai, Y. Amirat, M. Ghedira, J.-M. Gracies, and S. Mohammed, "Modified Adaptive Control of an Actuated Ankle Foot Orthosis to Assist Paretic Patients," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, accepted.

[2] V. Arnez-Paniagua, H. Rifaï, Y. Amirat, S. Mohammed, "Adaptive control of an actuated-ankle-foot-orthosis," in Proc. of the IEEE International Conference on Rehabilitation Robotics (ICORR), 2017, pp. 1584-1589.

[3] V. Arnez-Paniagua, H. Rifai, S. Mohammed, Y. Amirat, "Adaptive Control of an Actuated Ankle Foot Orthosis for Foot-Drop Correction," in Proc. of the IFAC World Congress, Toulouse, France, 2017, pp. 1384-1389.

Prototype Angelegs

 

Angelegs (Advanced Lower Limb exoskeleton for walking assistance)

ANGELEGS is a light wearable robot with nonresistant actuator technology. It is an improved version of the E-ROWA. The improvements in the new ANGELEGS mainly concern the design, the actuators, the robot's autonomy, and algorithms that can more easily be adapted to suit different needs. This robot is specifically designed for personal use and rehabilitation, and certification for its use is pending.

Lower limb exoskeleton ANGELEGS

DAVIA

 

Dataset VIdeo Annotation (DAVIA)

DatAset VIdeo Annotator (DAVIA) is a video-based application that helps annotate sensor data related to human activities of daily living (ADL). It is developed in Java Swing and requires the VLC Media Player libraries.

Fig.1. DAVIA GUI

Users can display either one or two video streams in parallel with the sensor data. Displaying the sensor data allows users to visually check whether the sensors have recorded data corresponding to a given motion. A dedicated button for playing the video faster or slower enables refined annotation of complex motions. DAVIA also allows users to export activity annotations, as well as their contextual attributes, to CSV files. Each label is assigned a start and an end timestamp corresponding to the duration of the corresponding video sequence. Labels are based on the reference date-time set at the beginning of the labeling process. Configuration files are used to define the labels of activities and contexts. Dedicated subprograms are run with the tool to synchronize the sensor data before the annotation process.
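An annotation export of this kind can be sketched as follows. The column layout (label, start, end, context) is an assumption for illustration, not DAVIA's exact CSV schema.

```python
import csv
import io

def write_annotations(rows):
    """Write activity labels with start/end timestamps to CSV text.
    Each row is (label, start_timestamp, end_timestamp, context);
    this layout is a hypothetical stand-in for the real schema."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["label", "start", "end", "context"])
    for r in rows:
        w.writerow(r)
    return buf.getvalue()
```

Keeping one row per labeled interval, with absolute timestamps derived from the reference date-time, makes the annotations easy to align with any sensor stream recorded on the same clock.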

UBISTRUCT Dataset Factory

 


It is a software platform composed of sensing agents that can do real-time collection and transmission of sensor data to a central cloud consolidation system. These sensing agents are implemented by using the Ubsitruct Middleware API. The sensors' data are transmitted by using the HTTP protocol (an in the future MQTT protocol) and consolidated in a private cloud platform by using the Bull Big Data Capabilities Framework (BDCF). BDCF provides set of software components (called capabilities) to build Realtime Big Data Analytics Applications. The BDCF framework  provide a PaaS services to a dedicated private cloud hardware, which allows creating VMs, composing topologies of components dedicated for online Data Processing and Machine Learning. These components are available in the BDCF catalogue and used for specifying topologies through Alien4Cloud.
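A sensing agent's HTTP transmission step can be sketched as below. The JSON field names and the endpoint are hypothetical; the Ubistruct Middleware API's actual message format is not assumed here.

```python
import json
import urllib.request

def make_reading(sensor_id, value, timestamp):
    """Serialize one sensor reading as JSON bytes.
    Field names are illustrative placeholders."""
    return json.dumps({"sensor": sensor_id, "value": value,
                       "ts": timestamp}).encode("utf-8")

def post_reading(url, payload):
    """Send a reading to the cloud consolidation endpoint over HTTP POST.
    (Performs a network call; shown here for structure only.)"""
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```

The same payload could later be published over MQTT instead of HTTP without changing the serialization step.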

The BDCF private cloud hardware is hosted in Atos (Bull) French facilities and made available to the LISSI laboratory, in the context of the European project ITEA3 Medolution, in order to store the collected datasets in large, dependable databases and to develop machine learning algorithms using the BDCF framework. This hardware delivers high memory, compute, and storage capabilities thanks to a cluster architecture providing a small data centre for Cloud/Big Data, including a mix of standard x86 servers, large in-memory servers (Bullion, 4/8 TB, up to 16 sockets), as well as different types of storage systems (> 100 TB). It also includes an OpenStack IaaS (Liberty version) to virtualize this hardware.

The Dataset Factory agents can read data from the Ubistruct living lab sensors, such as:

  • Xsens' MVN motion tracking sensors that are tailored for building efficient 3D models of human motion.
  • Wavetrend RFID Sensors for tracking objects and humans
  • Cleode Ambient Sensors that capture the daily context: door opening/closing, power consumption, luminosity, water leaks and presence detection.
  • Infra Red Sensors
  • Activity trackers such as Fitbit and Withings
  • Vital Signs Monitoring Sensors such as Mysignal or iHealth Labs

 

Smart-rules

 


The Internet of Robotic Things (IoRT) is an emerging vision that brings together pervasive sensors and objects with robotic and autonomous systems. The Smart-rules framework, previously called SembySem, is the result of research activities undertaken with several partners in the context of the European project SEMbySEM. The Smart-rules framework allows IoRT operators to easily set up and reuse context monitoring rules and to handle different reactive strategies according to the captured contexts, coping with different levels of perception of the real world. The context monitoring logic is based on a production rules language designed specifically for generating actions when certain contextual conditions hold. These conditions are semantically specified using a dedicated ontology language called µ-Concept.

Smart-rules is particularly well suited for IoRT systems that need to perform reactive, online reasoning, as it uses the unique name assumption and closed-world reasoning. It also includes a series of constructs that facilitate the handling of incomplete information due to failures in perception.
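The flavor of closed-world production-rule reasoning can be sketched with a naive forward-chaining loop. This is a toy illustration of the principle, not the Smart-rules engine or the µ-Concept language.

```python
def run_rules(facts, rules, max_iter=10):
    """Naive forward chaining: each rule is (condition, action), where
    condition tests the fact set and action returns new facts to add.
    A fact absent from the set counts as false (closed-world assumption)."""
    facts = set(facts)
    for _ in range(max_iter):
        added = False
        for cond, action in rules:
            new = action(facts) - facts if cond(facts) else set()
            if new:
                facts |= new
                added = True
        if not added:          # fixed point reached
            break
    return facts

# Example context rule: raise an alert when a door is open and
# no presence is detected (negation as failure on "presence").
rules = [(lambda f: "door_open" in f and "presence" not in f,
          lambda f: {"raise_alert"})]
```

Under the closed-world assumption, the absence of a "presence" fact is enough to conclude that nobody is there, which is what makes this style of reasoning fast enough for reactive, online monitoring.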

Architecture of the Smart-rules platform (formerly named SembySem).

Smart-rules is also an operational platform that allows different IoRT operators to easily set up and handle different real-world heterogeneous entities, which are called manageable objects (MOs). An MO is any physical or virtual object that is manipulated by an IoT system; it includes sensors and actuators, as well as people, home objects, doors, robots, and rooms. MOs are represented within the system by interfaces that abstract access to the MO's state and operations. The architecture of the Smart-rules operational platform, illustrated in the figure above, is composed of two main layers: (1) the reasoning core layer, which handles monitoring operations and reasoning, and (2) the façade layer, for interaction with real-world sensing and actuation devices. The communication is based on Java messaging middleware that enables asynchronous communication between the two layers. The façade layer abstracts the complexity and heterogeneity of the manageable objects. The reasoning core layer is a run-time reasoning system that manages the context logic independently of the complexity and heterogeneity of an IoRT environment.
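The MO abstraction can be sketched as an interface with a concrete device behind it. The method names and the door example are illustrative assumptions, not the platform's actual API.

```python
from abc import ABC, abstractmethod

class ManageableObject(ABC):
    """Interface abstracting access to an MO's state and operations,
    as exposed by the facade layer to the reasoning core.
    Method names are hypothetical."""

    @abstractmethod
    def get_state(self) -> dict: ...

    @abstractmethod
    def invoke(self, operation: str, **kwargs) -> dict: ...

class DoorMO(ManageableObject):
    """A door exposed to the reasoning core through the facade layer."""

    def __init__(self):
        self._open = False

    def get_state(self):
        return {"open": self._open}

    def invoke(self, operation, **kwargs):
        if operation == "open":
            self._open = True
        elif operation == "close":
            self._open = False
        return self.get_state()
```

Because the reasoning core only sees this interface, the same monitoring rules can drive a physical door, a simulated one, or any other heterogeneous device.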

References:
[1] L. Sabri, S. Bouznad, S. Fiorini, A. Chibani, E. Prestes, and Y. Amirat, "An integrated semantic framework for designing context-aware Internet of Robotic Things systems," Integrated Computer-Aided Engineering, IOS Press, vol. 25, no. 2, pp. 137-156, 2018. <hal-01699863>

[2] S. Bouznad, A. Chibani, Y. Amirat, L. Sabri, E. Prestes, F. Sebbak, and S. Fiorini, "Context-Aware Monitoring Agents for Ambient Assisted Living Applications," in Proc. of the 13th European Conference on Ambient Intelligence, AmI 2017, Malaga, Spain, 2017, pp. 225-240. <hal-01539387>

[3] A. Chibani, A. Bikakis, T. Patkos, Y. Amirat, S. Bouznad, N. Ayari, and L. Sabri, "Using Cognitive Ubiquitous Robots for Assisting Dependent People in Smart Spaces," in Intelligent Assistive Robots: Recent Advances in Assistive Robotics for Everyday Activities, S. Mohammed, J. C. Moreno, K. Kong, and Y. Amirat, Eds., Springer Tracts in Advanced Robotics (STAR) series, 2015, pp. 297-316. <hal-01540976>

[4] L. Sabri, A. Chibani, Y. Amirat, G. P. Zarri, and P. Gatellier, "Semantic framework for context-aware monitoring of AAL ecosystems," in Ambient Assisted Living, N. M. Garcia, J. Rodrigues, D. C. Elias, and M. S. Dias, Eds., Taylor and Francis / CRC Press, 2015, pp. 573-602. <hal-01540977>

O-Smart NKRL

 


One of the most important issues in ambient intelligence is creating a unified architecture to integrate and manage context information from heterogeneous information sources, such as environmental web services, sensors, robots, and humans, specifically integration at the representation level. O-Smart NKRL is a cognitive architecture proposed as a framework for designing and operating assistive agents. This architecture is composed of three main layers.

At the low level, the communication and verbalization layer is composed of modules for interfacing with smart perception systems that capture relevant context data such as objects in the scene, RFID tags, beacons, human speech, etc.

At the middle level, the knowledge representation layer offers a service that converts context data into narrative semantic knowledge and, conversely, converts semantic knowledge into natural language verbalization or into an action that can be executed by an actuator or a robot in the real world. The semantic knowledge is represented using the Narrative Knowledge Representation Language (NKRL), a uniform representation tool based on two main ontologies that semantically describe both "static" and "dynamic" entities.

• HClass ontology: used to describe static entities. It is a taxonomy of entities (concepts and individuals) linked by the is-subclass-of or is-instance-of relation, similarly to any OWL ontology.
• HTemp ontology: used to describe dynamic entities corresponding to any state, situation, event, interaction, or change. These are narrative pieces of information describing what is happening in the environment. The HTemp ontology provides a set of generic templates for representing these dynamic entities. These templates are defined using the notions of "conceptual predicate" and "functional roles"; see the references below for more details.

The representation service creates NKRL predicative occurrences and saves them in a shared knowledge base, which can be queried by agents using NKRL queries. We distinguish four kinds of predicative occurrences:
• Spatio-temporal descriptions of the entities populating the ambient environment (humans, robots, devices, etc.).
• Events and fluents that describe what is going on in the environment, such as state changes, actions, activities, emotions, etc.
• Dialogues between human and robotic agents.
• Actions.
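The structure of a predicative occurrence, and of the search patterns used to query the knowledge base, can be sketched as below. The role names follow NKRL conventions (SUBJ, OBJ, LOCATION), but the data structures themselves are a simplified illustration, not the NKRL implementation.

```python
# A predicative occurrence pairs a conceptual predicate with filled
# functional roles; this dict-based encoding is a simplified sketch.
def occurrence(predicate, **roles):
    return {"predicate": predicate, "roles": roles}

def matches(occ, pattern):
    """A search pattern matches an occurrence when its predicate and
    every specified role value agree ('?' acts as a wildcard)."""
    if pattern["predicate"] != occ["predicate"]:
        return False
    return all(v == "?" or occ["roles"].get(k) == v
               for k, v in pattern["roles"].items())

# An event stored in the knowledge base, and a query against it.
event = occurrence("MOVE", SUBJ="robot_1", OBJ="cup_3", LOCATION="kitchen")
query = occurrence("MOVE", SUBJ="?", LOCATION="kitchen")
```

Queries of this shape, applied over the shared knowledge base, are what the reasoning services below operate on.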

At the high-level, the reasoning layer offers the following reasoning services:
• Inference of spatial and temporal relations between entities.
• Inference of the narrative context by analysing and linking relevant implicit relations between predicative occurrences.
• Question Answering.
• Action triggering based on preferences.
• Capability matching.

These services are implemented using two kinds of NKRL inference rules: transformation rules and hypothesis rules. These rules are processed by the NKRL Filtering Unification Module (FUM) and the Production Inference Engine. The FUM operates on predicative occurrences stored in the knowledge base by means of search patterns.

References:

[1] H. Abdelkawy, N. Ayari, A. Chibani, Y. Amirat, and F. Attal, "Deep HMResNet Model for Human Activity-Aware Robotic Systems," in Proc. of the AAAI 2018 Fall Symposium Series, Arlington, United States, Oct. 2018.

[2] H. Abdelkawy, S. Fiorini, A. Chibani, N. Ayari, and Y. Amirat, "Deep CNN and Probabilistic DL Reasoning for Contextual Affordances," in Proc. of the AAAI 2018 Fall Symposium Series, Arlington, United States, Oct. 2018.

[3] N. Ayari, A. Chibani, Y. Amirat, and G. Fried, "Contextual Knowledge Representation and Reasoning Models for Autonomous Robots," in Proc. of the AAAI 2017 Fall Symposium Series, Arlington, United States, Nov. 2017, pp. 246-253.

[4] N. Ayari, H. Abdelkawy, A. Chibani, and Y. Amirat, "Towards Semantic Multimodal Emotion Recognition for Enhancing Assistive Services in Ubiquitous Robotics," in Proc. of the AAAI 2017 Fall Symposium Series, Arlington, United States, Nov. 2017, pp. 2-9.

[5] N. Ayari, A. Chibani, Y. Amirat, and E. Matson, "A Semantic Approach for Enhancing Assistive Services in Ubiquitous Robotics," Robotics and Autonomous Systems, Elsevier, vol. 75, pp. 17-27, 2016.

[6] A. Chibani, A. Bikakis, T. Patkos, Y. Amirat, S. Bouznad, N. Ayari, and L. Sabri, "Using Cognitive Ubiquitous Robots for Assisting Dependent People in Smart Spaces," in Intelligent Assistive Robots: Recent Advances in Assistive Robotics for Everyday Activities, S. Mohammed, J. C. Moreno, K. Kong, and Y. Amirat, Eds., Springer Tracts in Advanced Robotics (STAR) series, 2015, pp. 297-316.

[7] N. Ayari, A. Chibani, and Y. Amirat, "Semantic Management of Human-Robot Interaction in Ambient Intelligence using N-ary Ontologies," in Proc. of ICRA 2013, Karlsruhe, Germany, May 2013, pp. 1164-1171.

[8] N. Ayari, A. Chibani, and Y. Amirat, "A Semantic Approach to Enhance Human-Robot Interaction in AmI Environments," in Human-Agent Interaction, Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Portugal, 2012.

[9] L. Sabri, A. Chibani, Y. Amirat, and G. P. Zarri, "Narrative Reasoning for Cognitive Ubiquitous Robots," in Knowledge Representation for Autonomous Robots, Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2011, San Francisco, United States, 2011.

[10] L. Sabri, A. Chibani, Y. Amirat, and G. P. Zarri, "Semantic and Architectural Approach for Spatio-Temporal Reasoning in Ambient Assisted Living," in Proc. of the 22nd International Joint Conference on Artificial Intelligence, IJCAI 2011, Barcelona, Spain, 2011, pp. 77-84.

Parkinson Rehabilitation Dataset

 


Dataset Information:

Due to the fast growth of the elderly population, the importance of healthcare assistance is becoming more noticeable. Rehabilitation plays a major role in patients' treatment plans, especially for Parkinsonian patients. Machine learning can facilitate rehabilitation by observing the patient's activity and by coaching and correcting him/her to properly follow the rehabilitation plan. This dataset contains a set of exercises recorded in different sessions, which will be useful for designing an effective e-assistant for patients. The dataset contains the performance of 7 subjects with different profiles (in terms of age, height, and gender). It includes motion tracking and video of the subjects while they follow the video coach on screen. Subjects wear 5 Xsens motion tracking sensors on the arms, legs, and chest and are recorded with 3 Kinect cameras from different views. Each subject performed 5 repetitions of the predefined sequence of exercises.

Figure 1. Xsens positions on subject’s body

Labeling of the dataset is done manually using the Anvil software and is validated at 2 levels (consistency of data and of labels). Each exercise consists of several sub-exercises, and the transitions between exercises are considered breaks. In addition, there is a group of labels describing the level of correctness, which can be used for ontological analysis. In summary, our labeling output has labels for:
- Exercises
- Sub-exercises
- Correctness of hand movements (right, left)
- Correctness of leg movements (right, left)

Figure 2 shows the Anvil tool used for labeling.

Figure 2. Labeling Tool

Attribute Information:

The dataset includes 5 Xsens and 3 Kinect sensors. Table 1 explains each attribute of the Xsens output file. The Xsens output is recorded at a frequency of 60 Hz.

Table 1. Xsens data specification
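Since the Xsens streams are sampled at a fixed 60 Hz, frame indices can be mapped to time for alignment with the Kinect videos and the Anvil labels. The helper below is a generic sketch; the actual columns of the output file are those given in Table 1 and are not assumed here.

```python
import numpy as np

FS_XSENS = 60.0  # Xsens recording frequency (Hz), from the dataset description

def sample_times(n_samples, fs=FS_XSENS):
    """Timestamps in seconds for a fixed-rate recording, used to align
    sensor frames with video-based annotation intervals."""
    return np.arange(n_samples) / fs
```

For example, an annotation interval taken from the labels can be converted to a frame range by multiplying its start/end times by the 60 Hz rate.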

For any questions and/or comments on this dataset, please contact us by filling in the form below.