International Publications

Mezzoudj S, Behloul A, Seghir R, Saadna Y. A parallel content-based image retrieval system using Spark and Tachyon frameworks. Journal of King Saud University - Computer and Information Sciences. 2021.

With the huge increase of large-scale multimedia content on the Internet, especially images, building Content-Based Image Retrieval (CBIR) systems for large-scale image collections has become a big challenge. One of the drawbacks associated with CBIR is the very long execution time. In this article, we propose a fast Content-Based Image Retrieval system using Spark (CBIR-S) targeting large-scale images. Our system is composed of two steps: (i) an image indexing step, in which we use the MapReduce distributed model on Spark to speed up the indexing process, together with a memory-centric distributed storage system, called Tachyon, to enhance write operations; and (ii) an image retrieval step, which we speed up by using a parallel k-Nearest Neighbors (k-NN) search method based on the MapReduce model implemented on Apache Spark, in addition to exploiting the caching mechanism of the Spark framework. We have shown, through a wide set of experiments, the effectiveness of our approach in terms of processing time.
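
As a rough illustration of the parallel k-NN retrieval step described above, the sketch below distributes precomputed feature vectors as a Spark RDD and finds the k closest images to a query with a map (one distance per image) followed by a top-k reduction. The feature dimensionality, dataset size and helper names are placeholder assumptions, not the paper's CBIR-S implementation.

```python
# Hedged sketch: distributed k-NN search over image feature vectors in PySpark.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cbir-knn-sketch").getOrCreate()
sc = spark.sparkContext

# Toy index of (image_id, feature_vector) pairs; a real system would load
# these from distributed storage (e.g. Tachyon/Alluxio or HDFS).
features = sc.parallelize(
    [(f"img_{i}", np.random.rand(128)) for i in range(1_000)]
).cache()  # Spark's cache keeps the index in memory across queries

def knn(query_vec, k=10):
    """Map: compute one distance per image; reduce: keep the k smallest."""
    q = sc.broadcast(query_vec)
    return (features
            .map(lambda kv: (kv[0], float(np.linalg.norm(kv[1] - q.value))))
            .takeOrdered(k, key=lambda kv: kv[1]))

print(knn(np.random.rand(128), k=5))
spark.stop()
```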

Soundes B, Larbi G, Samir Z. Pseudo Zernike moments-based approach for text detection and localisation from lecture videos. International Journal of Computational Science and Engineering. 2019;19(2):274-283.

Scene text presents challenging characteristics, mainly related to acquisition circumstances and environmental changes, resulting in low-quality videos. In this paper, we present a scene text detection algorithm based on pseudo Zernike moments (PZMs) and stroke features for low-resolution lecture videos. The algorithm consists mainly of three steps: slide detection, text detection and segmentation, and non-text filtering. In lecture videos, the slide region is a key object carrying almost all the important information; hence, the slide region has to be extracted and segmented from the other scene objects, considered as background, for later processing. Slide region detection and segmentation is done by applying pseudo Zernike moments to RGB frames. Text detection and extraction is performed using PZM segmentation over the V channel of the HSV colour space, and then a stroke feature is used to filter out non-text regions and remove false positives. The algorithm is robust to illumination, low resolution and uneven luminance from compressed videos. The effectiveness of the PZM description leads to very few false positives compared to other approaches. Moreover, the resulting images can be used directly by OCR engines and no further processing is needed.

Hamouid K, Adi K. Secure and reliable certification management scheme for large-scale MANETs based on a distributed anonymous authority. Peer-to-Peer Networking and Applications. 2019;12(5):1137-1155.

This paper proposes a compromise-tolerant (t,n)-threshold certification management scheme for MANETs. Our solution mitigates the impact of compromised nodes that participate in the certification service. In our design, certification management is achieved anonymously by an Anonymous Certification Authority (ACA). The latter is fully distributed over multiple disjoint coalitions of nodes whose structure is kept hidden. This prevents an adversary from taking control of the ACA by arbitrarily compromising t or more nodes. In other words, our proposal enhances compromise-tolerance beyond the threshold number t of nodes without breaking down the whole certification system. As a result, our scheme requires a much smaller threshold than traditional schemes, which considerably improves service availability. The experimental study shows a clear advantage over traditional threshold-based certification schemes by ensuring a significantly better trade-off between security and availability of the certification service.
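
The construction above is protocol-level, but the (t,n)-threshold idea it rests on can be shown with a plain Shamir secret-sharing sketch: any t shares reconstruct a secret (for example, a CA signing key), while fewer reveal nothing. This is a generic textbook illustration, not the paper's distributed anonymous-authority scheme, and the prime modulus and parameters below are arbitrary.

```python
# Generic (t, n)-threshold secret sharing (Shamir), for illustration only.
import random

P = 2**127 - 1  # prime modulus of the underlying field

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=7)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 7 shares suffice
```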

Belferdi W, Behloul A, Noui L. A Bayer pattern-based fragile watermarking scheme for color image tamper detection and restoration. Multidimensional Systems and Signal Processing. 2019;30(3):1093-1112.

The security of multimedia documents has become an urgent need, especially with the increase in image falsification enabled by the easy access to and use of image manipulation tools. Hence, image authentication techniques fulfill this need. In this paper, we propose an effective self-embedding fragile watermarking scheme for color image tamper detection and restoration. To decrease the insertion capacity, a Bayer pattern is used to reduce the color host image to a gray-level watermark; to further improve security, a Torus Automorphism permutation is used to scramble the gray-level watermark. In our algorithm, three copies of the watermark are inserted over the three components (R, G, and B channels) of the color host image, providing a high probability of detection accuracy and recovery if one copy is destroyed. In the tamper detection process, a majority voting technique is used to determine the legitimacy of the image and recover the tampered regions after interpolating the extracted gray-level watermark. Using our proposed method, the tampering rate can reach 25% while keeping a high visual quality of the recovered image and PSNR values greater than 34 dB. Experimental results demonstrate that the proposed method affords three major properties: high quality of the watermarked image, sensitive tamper detection with high localization accuracy, and high quality of the recovered image.
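
The Bayer-pattern reduction step mentioned above can be sketched in a few lines: each pixel of the gray-level watermark keeps a single color sample (R, G or B) according to its position in a 2x2 Bayer cell, shrinking the payload to one third of the color data. The RGGB ordering and NumPy layout below are assumptions for illustration; scrambling, embedding and recovery are omitted.

```python
# Sketch: reduce a color host image to a gray-level watermark via an RGGB
# Bayer pattern (assumed ordering); only the reduction step is shown.
import numpy as np

def bayer_reduce(rgb):
    """rgb: (H, W, 3) array with even H and W. Returns an (H, W) mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows / even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows / odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows / even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows / odd cols
    return mosaic

host = (np.random.rand(4, 4, 3) * 255).astype(np.uint8)
watermark = bayer_reduce(host)  # gray-level watermark, 1/3 of the color data
print(watermark.shape)          # (4, 4)
```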

Saadna Y, Behloul A, Mezzoudj S. Speed limit sign detection and recognition system using SVM and MNIST datasets. Neural Computing and Applications. 2019;31:5005-5015.

This article presents a computer vision system for real-time detection and robust recognition of speed limit signs, specially designed for intelligent vehicles. First, a new segmentation method is proposed to segment the image, and the circle Hough transform (CHT) is used to detect circles. Then, a new method based on local binary patterns is proposed to filter the segmented images in order to reduce false alarms. In the classification phase, a cascading architecture of two linear support vector machines is proposed. The first is trained on the GTSRB dataset to decide whether the detected region is a speed limit sign or not, and the second is trained on the MNIST dataset to recognize the sign numbers. The system achieves a classification recall of 99.81% with a precision of 99.08% on the GTSRB dataset; in addition, the system is also tested on the BTSD and STS datasets, where it achieves classification recalls of 99.39% and 98.82% with precisions of 99.05% and 98.78%, respectively, within a processing time of 11.22 ms.
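
The cascade of two linear SVMs can be summarized with the schematic sketch below: the first classifier accepts or rejects a detected circular region as a speed limit sign, and the second recognizes the segmented digits. It uses scikit-learn's LinearSVC on random placeholder features; loading GTSRB and MNIST, and the actual feature extraction, are left out.

```python
# Schematic two-stage linear SVM cascade (sign detector -> digit recognizer).
# Features and labels are random placeholders, not GTSRB/MNIST data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stage 1: is the candidate region a speed limit sign? (binary)
X_signs, y_signs = rng.random((200, 64)), rng.integers(0, 2, 200)
stage1 = LinearSVC().fit(X_signs, y_signs)

# Stage 2: which digit is each segmented character? (10 classes, MNIST-style)
X_digits, y_digits = rng.random((500, 64)), rng.integers(0, 10, 500)
stage2 = LinearSVC().fit(X_digits, y_digits)

def classify_region(region_features, digit_features):
    """Run the cascade on one detected circular region."""
    if stage1.predict([region_features])[0] != 1:
        return None                       # rejected: not a speed limit sign
    return [int(stage2.predict([d])[0])   # recognize each segmented digit
            for d in digit_features]

print(classify_region(rng.random(64), [rng.random(64), rng.random(64)]))
```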

Boubechal I, Rachid S, Benzid R. A Generalized and Parallelized SSIM-Based Multilevel Thresholding Algorithm. Applied Artificial Intelligence. 2019;33(14):1266-1289.

Multilevel thresholding is a widely used technique for image segmentation. It consists of dividing an input image into several distinct regions by finding the optimal thresholds according to a certain objective function. In this work, we generalize the use of the SSIM quality measure as an objective function for the multilevel thresholding problem, solved using empirically tuned swarm intelligence algorithms. The experimental study we have conducted shows that our approach, which produces near-exact solutions, is more effective than state-of-the-art methods. Moreover, we show that the computational complexity is significantly reduced by adopting a shared-memory parallel programming paradigm for all the algorithms we implemented.
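
As a minimal illustration of SSIM used as a thresholding objective, the sketch below scores candidate threshold sets by the SSIM between the original image and its thresholded reconstruction, using scikit-image; a simple random search stands in for the empirically tuned swarm-intelligence optimizers evaluated in the paper.

```python
# SSIM as the objective for multilevel thresholding; random search stands in
# for the swarm-intelligence optimizers used in the paper.
import numpy as np
from skimage import data
from skimage.metrics import structural_similarity as ssim

def apply_thresholds(img, thresholds):
    """Replace each pixel by the mean gray level of its threshold class."""
    bins = np.digitize(img, sorted(thresholds))
    out = np.zeros_like(img, dtype=float)
    for b in range(len(thresholds) + 1):
        mask = bins == b
        if mask.any():
            out[mask] = img[mask].mean()
    return out

def ssim_objective(img, thresholds):
    return ssim(img.astype(float), apply_thresholds(img, thresholds),
                data_range=float(img.max()) - float(img.min()))

img = data.camera()
rng = np.random.default_rng(0)
candidates = (tuple(sorted(rng.choice(np.arange(1, 255), 3, replace=False)))
              for _ in range(200))
best = max(candidates, key=lambda t: ssim_objective(img, t))
print("best thresholds:", best, "SSIM:", round(ssim_objective(img, best), 4))
```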

Saliha M, Ali B, Rachid S. Towards large-scale face-based race classification on Spark framework. Multimedia Tools and Applications. 2019;78(18):26729-26746.

Recently, the identification of an individual's race has become an important research topic in face recognition systems, especially for large-scale face image collections. In this paper, we propose a new large-scale race classification method which combines Local Binary Patterns (LBP) and Logistic Regression (LR) on the Spark framework. LBP is used to extract features from facial images, while Spark's logistic regression is used as a classifier to improve the accuracy and speed up the classification system. The race recognition method is run on the Spark framework to process large-scale data in a parallel way. The evaluation of our proposed method has been performed on two large face image datasets, CAS-PEAL and Color FERET. Two major race categories were considered in this work: Asian and Non-Asian. As a result, we achieve the highest race classification accuracy (99.99%) compared to Spark's Linear SVM, Naive Bayes (NB), Random Forest (RF), and Decision Tree (DT) classifiers. Our method is also compared against different state-of-the-art race classification methods; the obtained results show that our approach is more efficient in terms of accuracy and processing time.
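
A condensed sketch of this pipeline is given below: uniform LBP histograms (scikit-image) are used as features and fed to Spark ML's LogisticRegression. The images and labels are random placeholders, the histogram parameters are assumptions, and the CAS-PEAL / Color FERET loading is omitted.

```python
# LBP histogram features + Spark logistic regression, on stand-in "faces".
import numpy as np
from skimage.feature import local_binary_pattern
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LogisticRegression

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of an 8-bit grayscale face image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

spark = SparkSession.builder.appName("race-lbp-lr-sketch").getOrCreate()

# Placeholder data: random images, binary labels (0.0 = Asian, 1.0 = Non-Asian).
rng = np.random.default_rng(0)
rows = [(float(rng.integers(0, 2)),
         Vectors.dense(lbp_histogram((rng.random((64, 64)) * 255).astype(np.uint8))))
        for _ in range(200)]
df = spark.createDataFrame(rows, ["label", "features"])

model = LogisticRegression(maxIter=50).fit(df)
print("training accuracy:", model.summary.accuracy)
spark.stop()
```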

Guezouli L, Belhani H. Motion Detection of Some Geometric Shapes in Video Surveillance. American Journal of Data Mining and Knowledge Discovery. 2017;2(1):8-14.

Motion detection is an active research issue, and moving objects are an important clue for smart video surveillance systems. In this work, we address motion detection in video surveillance systems. The aim of our work is to propose solutions for the automatic detection of moving objects in real time with a surveillance camera. We are interested in objects that have a particular geometric shape (circle, ellipse, square, or rectangle). The proposed approaches are based on background subtraction and edge detection, and the algorithms consist mainly of three steps: edge detection, extraction of objects with the targeted geometric shapes, and motion detection of the extracted objects.
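
A compact OpenCV sketch of such a pipeline is shown below: background subtraction isolates moving pixels, and the circle Hough transform then keeps only moving objects with a circular shape (ellipses, squares and rectangles would be handled with contour analysis instead). The video path and every parameter value are placeholders, not the paper's settings.

```python
# Background subtraction + circle detection on moving regions with OpenCV.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # placeholder video path
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion_mask = bg.apply(frame)                         # background subtraction
    moving = cv2.bitwise_and(gray, gray, mask=motion_mask)
    moving = cv2.medianBlur(moving, 5)
    # HoughCircles performs its own internal edge detection; other shapes
    # (ellipse, square, rectangle) would use findContours + shape tests.
    circles = cv2.HoughCircles(moving, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=120, param2=40, minRadius=10, maxRadius=100)
    if circles is not None:
        for x, y, r in circles[0]:
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.imshow("moving circles", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```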

Saadna Y. An overview of traffic sign detection and classification methods. International Journal of Multimedia Information Retrieval. 2017;6(3):193-210.

Over the last few years, different traffic sign recognition (TSR) systems have been proposed. The present paper gives an overview of some recent and efficient methods for traffic sign detection and classification. The main goal of detection methods is to localize regions of interest containing traffic signs, and we divide detection methods into three main categories: color-based (classified according to the color space), shape-based, and learning-based methods (including deep learning). In addition, we divide classification methods into two categories: learning methods based on hand-crafted features (HOG, LBP, SIFT, SURF, BRISK) and deep learning methods. For easy reference, the different detection and classification methods are summarized in tables along with the different datasets. Furthermore, future research directions and recommendations are given in order to boost TSR performance.

Baroudi T, Seghir R, Loechner V. Optimization of Triangular and Banded Matrix Operations Using 2d-Packed Layouts. ACM Transactions on Architecture and Code Optimization (TACO). 2017;14(4).

Over the past few years, multicore systems have become increasingly powerful and thereby very useful in high-performance computing. However, many applications, such as some linear algebra algorithms, still cannot take full advantage of these systems. This is mainly due to the shortage of optimization techniques dealing with irregular control structures. In particular, the well-known polyhedral model fails to optimize loop nests whose bounds and/or array references are not affine functions. This is more likely to occur when handling sparse matrices in their packed formats. In this article, we propose using 2d-packed layouts and simple affine transformations to enable optimization of triangular and banded matrix operations. The benefit of our proposal is shown through an experimental study over a set of linear algebra benchmarks.
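
To see why packed formats defeat affine-based loop optimizers, consider the classic 1D packed storage of a lower-triangular matrix sketched below: its index function i*(i+1)/2 + j is quadratic in i, hence non-affine. The paper's 2d-packed layouts instead fold the triangle into a rectangular 2D array so that accesses stay affine; that exact mapping is not reproduced here.

```python
# Classic 1D packed storage of a lower-triangular matrix; its quadratic index
# function is the kind of non-affine access the paper's 2d-packed layouts avoid.
import numpy as np

def pack_lower(A):
    """Store the lower triangle of an n x n matrix row by row in a 1D array."""
    n = A.shape[0]
    return np.concatenate([A[i, :i + 1] for i in range(n)])

def packed_index(i, j):
    """Index of element (i, j), with j <= i, in the packed array."""
    return i * (i + 1) // 2 + j   # quadratic in i, i.e. not affine

n = 5
A = np.tril(np.arange(n * n, dtype=float).reshape(n, n))
packed = pack_lower(A)            # n*(n+1)/2 elements instead of n*n
assert packed[packed_index(3, 2)] == A[3, 2]
print(len(packed), "elements stored instead of", n * n)
```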

Guezouli L, Azzouz I. Enhancement of the Fusion of Incompatible Lists of Results. International Journal of Digital Information and Wireless Communications (IJDIWC). 2016;6(2):78-86.

This work lies in the domain of distributed information retrieval (DIR). A simplified view of DIR involves searching over a set of collections, which requires the system to analyze the results found in these collections and merge them into a single list before sending them to the user. The main issue of our work is to find a fusion method based on the relevance score of each result received from the collections and on the relevance of the local search engine of each collection.
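
A small sketch of the kind of fusion described above: scores coming from each collection are first min-max normalized so that incompatible scales become comparable, weighted by a per-collection relevance factor, and then merged into a single ranked list. The weights, scores and collection names are invented placeholders, and the paper's actual weighting of local search engines is not reproduced.

```python
# Merging incompatible result lists: per-collection min-max normalization,
# weighting by collection relevance, then one merged ranking.
def normalize(results):
    """results: list of (doc_id, raw_score); min-max normalize to [0, 1]."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(d, (s - lo) / span) for d, s in results]

def fuse(result_lists, collection_weights):
    merged = []
    for name, results in result_lists.items():
        w = collection_weights[name]
        merged += [(doc, w * score) for doc, score in normalize(results)]
    return sorted(merged, key=lambda ds: ds[1], reverse=True)

# Placeholder scores on incompatible scales (0-1 versus 0-1000).
lists = {"collA": [("d1", 0.91), ("d2", 0.40)],
         "collB": [("d3", 870.0), ("d4", 120.0)]}
print(fuse(lists, {"collA": 0.7, "collB": 0.3}))
```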

Benhamouda S, Guezouli L. Selection of Relevant Servers in Distributed Information Retrieval System. International Journal of Computer and Information Engineering. 2016;10(5).

Nowadays, information dissemination is largely distributed, and selecting the servers relevant to a user request is an important problem in distributed information retrieval. During the last decade, several research studies on this issue have sought optimal solutions, and many collection selection approaches have been proposed. In this paper, we propose a new collection selection approach that takes into consideration the number of documents in a collection that contain terms of the query and the weights of those terms in these documents. Our experiments show that this technique can compete with the state-of-the-art algorithms we chose for comparison.
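
A toy sketch of that selection criterion follows: each collection is scored from, per query term, the number of documents containing the term and an average term weight. The product-sum combination below is an illustrative placeholder rather than the paper's exact formula, and the collection statistics are invented.

```python
# Toy collection selection from per-term document counts and term weights.
def score_collection(stats, query_terms):
    """stats: {term: (num_docs_containing_term, avg_term_weight)}."""
    score = 0.0
    for term in query_terms:
        df, avg_w = stats.get(term, (0, 0.0))
        score += df * avg_w        # placeholder combination of count and weight
    return score

collections = {
    "news":   {"spark": (120, 0.8), "image": (40, 0.3)},
    "health": {"image": (300, 0.6)},
}
query = ["spark", "image"]
ranked = sorted(collections, reverse=True,
                key=lambda c: score_collection(collections[c], query))
print(ranked)   # collections ordered by estimated relevance to the query
```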

Guezouli L, Essafi H. Search of Information Based Content in Semi-Structured Documents Using Interference Wave. International Journal of Computational Science, Information Technology and Control Engineering. 2016;3(3):29-39.

This paper proposes a semi-structured information retrieval model based on a new method for calculating similarity. We have developed the CASISS (CAlculation of SImilarity of Semi-Structured documents) method to quantify how similar two given texts are. This new method identifies elements of semi-structured documents using element descriptors. Each semi-structured document is pre-processed before the extraction of a set of descriptors for each element, which characterize the contents of the elements. The method can be used to increase the accuracy of the information retrieval process by taking into account not only the presence of query terms in the given document but also the topology (position continuity) of these terms.

Guezouli L, Essafi H. CAS-based information retrieval in semi-structured documents: CASISS model. Journal of Innovation in Digital Ecosystems. 2016;3(2):155-162.

This paper addresses the assessment of the similarity between documents or pieces of documents. For this purpose, we have developed the CASISS (CAlculation of SImilarity of Semi-Structured documents) method to quantify how similar two given texts are. The method can be employed in a wide range of applications, including content reuse detection, which is a hot and challenging topic. It can also be used to increase the accuracy of the information retrieval process by taking into account not only the presence of query terms in the given document (Content Only search, CO) but also the topology (position continuity) of these terms (Content And Structure search, CAS). Tracking the origin of information in social media, copyright management, plagiarism detection, social media mining and monitoring, and digital forensics are among the other applications that require tools such as CASISS to measure, with high accuracy, the content overlap between two documents.

CASISS identifies elements of semi-structured documents using element descriptors. Each semi-structured document is pre-processed before the extraction of a set of element descriptors, which characterize the content of the elements.