ECLiPSe is an open-source software system for the cost-effective development and deployment of constraint programming applications, e.g. in the areas of planning, scheduling, resource allocation, timetabling, and transport.

leanCoP is a compact automated theorem prover for classical first-order logic, based on the connection calculus and implemented in Prolog. leanCoP 2.1 runs on ECLiPSe Prolog (5.x), SWI-Prolog, or SICStus Prolog. More information can be found here.

A modified version of Karto SLAM package for ROS Fuerte is available here.

Here you can find the high-level control knowledge of the BlueBotics Absolem robotic system. The domain knowledge, the control knowledge, and the planning algorithms are based on the Situation Calculus and implemented in ECLiPSe Prolog (5.x). This work is based on the bachelor thesis project of Pierluigi Calabria. Part of the Prolog code in this project comes from the Cognitive Robotics Lab, University of Toronto.

Here you can find a graphical user interface for developing component-based robotic systems. The project is based on the Qt development framework. This work was done by Jodi Padulano for a bachelor thesis.



Confidence driven TGV fusion

The source code corresponding to the method presented in the paper:

Confidence driven TGV fusion, V. Ntouskos, F. Pirri, arXiv:1603.09302, 2016

is available from our GitHub account here.


The synthetic dataset used in the paper is available for download here.

The disparity maps of the KITTI 2012 multi-view stereo dataset, computed with the method of "J. Zbontar and Y. LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR 2015", are available for download here.



Articulated Object Modeling

The source code corresponding to the method presented in the paper:

Component-wise Modeling of Articulated Objects, V. Ntouskos, M. Sanzari, B. Cafaro, F. Nardi, F. Natola, F. Pirri, M. Ruiz, ICCV'15.

is available from our GitHub account here.

Contacts: {ntouskos,sanzari}


In order to perform the modeling, a structure with fields matching the names of the components is required as input. Each field is a structure array with as many elements as the number of aspects used to model that component. Each element of this array holds a structure containing the binary segmentation mask of the aspect in the respective image.
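As an illustration, such an input structure could be built as follows. The component names ('seat', 'back'), the mask field name ('mask'), the aspect counts, and the image size are all invented for this sketch and must be adapted to the actual data:

```matlab
% Hypothetical input for an object with two components, 'seat' and 'back'.
% Field names, aspect counts and image size are illustrative only.
input = struct();

% 'seat' is modeled from 3 aspects; each element of the structure array
% holds the binary segmentation mask of that aspect in its image.
for a = 1:3
    input.seat(a).mask = false(480, 640);   % binary segmentation mask
end

% 'back' is modeled from 2 aspects.
for a = 1:2
    input.back(a).mask = false(480, 640);
end
```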

To run the program, first add the 'third party' folder and all its sub-folders to the Matlab path. Then change the current directory to the 'scripts' folder and run the 'main' function, providing as arguments the aforementioned structure and a path where the results will be stored.
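The steps above amount to the following Matlab session (the variable name 'input' and the results path are placeholders):

```matlab
% Add the 'third party' folder and all its sub-folders to the Matlab path.
addpath(genpath('third party'));

% Run 'main' from the 'scripts' folder, passing the input structure
% described above and a folder where the results will be stored.
cd scripts
main(input, '/path/to/results');
```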

Currently, for the example provided with this code, the transformations required for the registration between the aspects and the component assembling have been precomputed and are provided in the files 'RtAspects.mat' and 'RtComponents.mat', respectively. The source code for automatically estimating these transformations will be added soon.

External dependencies:
Ian Mitchell's ToolboxLS


Action Recognition

Here you can find the Matlab code for the algorithm presented in the paper "Bayesian non-parametric inference for manifold based MoCap representation" by Fabrizio Natola, Valsamis Ntouskos, Marta Sanzari, and Fiora Pirri, Sapienza University of Rome, ALCOR Lab.

Contacts: {natola,ntouskos,sanzari,pirri}


Using ‘features_Computation.m’ it is possible to construct the features based on the Principal Geodesic Analysis algorithm, so as to train the model via the Dirichlet Process Mixture Model (DPMM). The features are 7-dimensional and take into account the rotational and translational parts of the principal component, plus the norm of the velocity of the principal component. The features are subdivided into 6 groups according to the different parts of the skeleton: the head, the left arm, the right arm, the torso, the left leg, and the right leg.
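A minimal sketch of the layout of one such feature vector is given below. The exact 3+3+1 split between the rotational part, the translational part, and the velocity norm is an assumption of this sketch, not taken from the code:

```matlab
% Illustrative 7-dimensional feature vector (the 3+3+1 decomposition
% is assumed here for illustration; values are arbitrary).
rot   = [0.10; -0.20; 0.05];   % rotational part of the principal component
trans = [1.20;  0.40; -0.70];  % translational part of the principal component
vnorm = 0.90;                  % norm of the velocity of the principal component
feature = [rot; trans; vnorm]; % 7x1 feature vector

% Features are grouped by body part:
groups = {'head','left arm','right arm','torso','left leg','right leg'};
```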

The folder ‘Action_dataset’ contains all 18 MoCap sequences considered by the algorithm presented here (in both .amc and .bvh formats). The skeletons for the .amc format are available in the ‘skel files HDM05’ folder.
The corresponding features are subdivided into testing and training data in the folders ‘TestData’ and ‘TrainingData’.

In the folder ‘Models’, there are the models estimated for each considered action class, together with the Matlab script ‘testDPMM.m’ to test the ‘TestingData’ against the ‘TrainingData’. The model for each class is structured as follows. Each ‘Model’ has the ‘name’ of the action for which it has been estimated, together with a structure called ‘group’. The latter is composed of 6 sets of parameters (one for each sub-body group), with the corresponding number of classes (clusters) estimated for each group. Each element of ‘group’ (for i=1:6) has as main values the number of elements belonging to each cluster and the mean and covariance of the cluster.
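The model layout described above can be sketched as follows. Only the field names ‘name’ and ‘group’ come from the description; the inner field names ('counts', 'mu', 'Sigma'), the cluster count, and the feature dimension are invented for illustration:

```matlab
% Illustrative layout of one action-class model (inner field names and
% sizes are assumptions of this sketch).
Model.name = 'walk';     % action class the model was estimated for
for i = 1:6              % one parameter set per sub-body group
    Model.group(i).counts = [12; 7; 3];              % elements per cluster
    Model.group(i).mu     = zeros(7, 3);             % one 7-dim mean per cluster
    Model.group(i).Sigma  = repmat(eye(7), [1 1 3]); % one covariance per cluster
end
```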

© 2017 Alcor