## Publications

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the Publisher.

My publications are also indexed by DBLP and ORCID.

If you are interested in my talks, all the slides are available here!

### PhD Thesis

• [2013,phdthesis] bibtex
R. J. Rodríguez, Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets, PhD Thesis, 2013.
@PhdThesis{RJRodriguez-PhD-Thesis, Title = {{Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets}},   Author = {Ricardo J. Rodr\'{i}guez},   School = {Dpto. de Informática e Ingeniería de Sistemas, Universidad de Zaragoza},   Year = {2013},   Month = {June},   Abstract = {A critical system must fulfil its mission despite the presence of security issues. These systems are usually deployed in heterogeneous environments, where they are subject to suffer security issues, such as intrusion attempts, confidential data theft or other type of attacks. Systems usually need to be redesigned after a security disaster, which can lead to severe consequences, such as the huge cost of reimplementing or redeploying all the system, as well as economic losses. Security has to be conceived as an integral part of the development process and as a singular need of what the system should perform (i.e., a non-functional requirement). Thus, when designing critical systems it is fundamental to study the attacks that may occur and plan how to react to them, in order to keep fulfilling the system functional and non-functional requirements. Despite considering security issues, it is also necessary to consider the costs incurred to guarantee a certain security level in critical systems. In fact, security costs can be very relevant and may span along different dimensions, such as budgeting, performance and reliability. Many of these critical systems that incorporate Fault-Tolerant (FT) techniques to deal with security issues are complex systems using resources that are compromised (i.e., they fail) by the activation of faults. These systems can be naturally modelled as Discrete Event Systems (DES) where resources are shared, also called Resource Allocation Systems (RAS). In this dissertation, we focus on FT systems using shared resources modelled as Petri nets (PNs) as formal model language. 
These systems are usually so large that make the exact computation of their performance a highly complex computational task, due to the well-known state explosion problem. As a result, a task that requires an exhaustive state space exploration becomes unachievable in reasonable time for large systems. The main contribution of this dissertation is threefold. Firstly, we provide different models, expressed by means of the Unified Modelling Language (UML) and Petri nets (PNs), to bring security and FT issues into foreground while designing, then allowing the analysis of security-performance trade-off. Secondly, we provide several algorithms to compute the performance (also performability -- i.e., performance under failure conditions) by means of upper throughput bounds, then avoiding the state space explosion problem. Lastly, we provide algorithms to compensate the throughput degradation produced by an unexpected situation in a FT system.},   ISSN = {2254-7606},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RJRodriguez-PhD-Thesis.pdf} }

### Journal Articles

• [2018,article] bibtex
B. Liu, X. Chang, Z. Han, K. Trivedi, and R. J. Rodríguez, Model-based Sensitivity Analysis of IaaS Cloud Availability, Future Generation Computer Systems, vol. 83, pp. 1-13, 2018.
@Article{LCHTR-FGCS-18,   author = {Bo Liu and Xiaolin Chang and Zhen Han and Kishor Trivedi and Ricardo J. Rodr\'{i}guez},   title = {{Model-based Sensitivity Analysis of IaaS Cloud Availability}},   journal = {Future Generation Computer Systems},   year = {2018},   volume = {83},   pages = {1--13},   month = jun, abstract = {The increasing shift of various critical services towards Infrastructure-as-a-Service (IaaS) cloud data centers (CDCs) creates a need for analyzing CDCs' availability, which is affected by various factors including repair policy and system parameters. This paper aims to apply analytical modeling and sensitivity analysis techniques to investigate the impact of these factors on the availability of a large-scale IaaS CDC, which (1) consists of active and two kinds of standby physical machines (PMs), (2) allows PM moving among active and two kinds of standby PM pools, and (3) allows active and two kinds of standby PMs to have different mean repair times. Two repair policies are considered: (P1) all pools share a repair station and (P2) each pool uses its own repair station. We develop monolithic availability models for each repair policy by using Stochastic Reward Nets and also develop the corresponding scalable two-level models in order to overcome the monolithic models limitations, caused by the large-scale feature of a CDC and the complicated interactions among CDC components. We also explore how to apply differential sensitivity analysis technique to conduct parametric sensitivity analysis in the case of interacting submodels. Numerical results of monolithic models and simulation results are used to verify the approximate accuracy of interacting sub-models, which are further applied to examine the sensitivity of the large-scale CDC availability with respect to repair policy and system parameters.},   doi = {10.1016/j.future.2017.12.062},   url = {http://webdiis.unizar.es/~ricardo/files/papers/LCHTR-FGCS-18.pdf},   }
• [2018,article] bibtex
E. Gómez-Martínez, R. J. Rodríguez, C. B. Earle, L. E. Elorza, and M. I. Rezaba, A Methodology for Model-based Verification of Safety Contracts and Performance Requirements, Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, vol. 232, iss. 3, pp. 227-247, 2018.
@Article{GREIB-JRR-18,   author = {Elena G\'{o}mez-Mart\'{i}nez and Ricardo J. Rodr\'{i}guez and Clara Benac Earle and Leire Etxeberria Elorza and Miren Illarramendi Rezaba},   title = {{A Methodology for Model-based Verification of Safety Contracts and Performance Requirements}},   journal = {Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability},   year = {2018},   volume = {232},   number = {3},   pages = {227--247},   abstract = {The verification of safety requirements becomes crucial in critical systems where human lives depend on their correct functioning. Formal methods have often been advocated as necessary to ensure the reliability of software systems, albeit with a considerable effort. In any case, such an effort is cost-effective when verifying safety-critical systems. Often, safety requirements are expressed using safety contracts, in terms of assumptions and guarantees. To facilitate the adoption of formal methods in the safety-critical software industry, we propose a methodology based on well-known modelling languages such as UML and OCL. UML is used to model the software system while OCL is used to express the system safety contracts within UML. In the proposed methodology a UML model enriched with OCL constraints is transformed to a Petri net model that enables to formally verify such safety contracts. The methodology is evaluated on an industrial case study. The proposed approach allows an early safety verification to be performed, which increases the confidence of software engineers while designing the system.},   doi = {10.1177/1748006X16667328},   owner = {ricardo},   timestamp = {2015.10.16},   url = {http://webdiis.unizar.es/~ricardo/files/papers/GREIB-JRR-18.pdf},   }
• [2018,article] bibtex
R. J. Rodríguez, S. Bernardi, and A. Zimmermann, An Evaluation Framework for Comparative Analysis of Generalized Stochastic Petri Net Simulation Techniques, IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp. 1-11, 2018.
@Article{RBZ-SMC-Sys-18,   author = {R. J. Rodr\'{i}guez and S. Bernardi and A. Zimmermann},   title = {{An Evaluation Framework for Comparative Analysis of Generalized Stochastic Petri Net Simulation Techniques}},   journal = {IEEE Transactions on Systems, Man, and Cybernetics: Systems},   year = {2018},   pages = {1-11},   issn = {2168-2216},   abstract = {Availability of a common, shared benchmark to provide repeatable, quantifiable, and comparable results is an added value for any scientific community. International consortia provide benchmarks in a wide range of domains, being normally used by industry, vendors, and researchers for evaluating their software products. In this regard, a benchmark of untimed Petri net models was developed to be used in a yearly software competition driven by the Petri net community. However, to the best of our knowledge there is not a similar benchmark to evaluate solution techniques for Petri nets with timing extensions. In this paper, we propose an evaluation framework for the comparative analysis of generalized stochastic Petri nets (GSPNs) simulation techniques. Although we focus on simulation techniques, our framework provides a baseline for a comparative analysis of different GSPN solvers (e.g., simulators, numerical solvers, or other techniques). The evaluation framework encompasses a set of 50 GSPN models including test cases and case studies from the literature, and a set of evaluation guidelines for the comparative analysis. In order to show the applicability of the proposed framework, we carry out a comparative analysis of steady-state simulators implemented in three academic software tools, namely, GreatSPN, PeabraiN, and TimeNET. 
The results allow us to validate the trustfulness of these academic software tools, as well as to point out potential problems and algorithmic optimization opportunities.},   doi = {10.1109/TSMC.2018.2837643},   keywords = {Analytical models;Benchmark testing;Guidelines;Indexes;Petri nets;Stochastic processes;Tools;Benchmarking;generalized stochastic Petri nets (GSPNs);performance;simulation software},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RBZ-SMC-Sys-18.pdf},   }
• [2018,article] bibtex
X. Chang, T. Wang, R. J. Rodríguez, and Z. Zhang, Modeling and Analysis of High Availability Techniques in a Virtualized System, The Computer Journal, vol. 61, iss. 2, pp. 180-198, 2018.
@Article{CWRZ-COMPJ-18,   author = {Xiaolin Chang and Tianju Wang and Ricardo J. Rodr\'{i}guez and Zhenjiang Zhang},   title = {{Modeling and Analysis of High Availability Techniques in a Virtualized System}},   journal = {The Computer Journal},   year = {2018},   volume = {61},   number = {2},   pages = {180--198},   month = feb, abstract = {Availability evaluation of a virtualized system is critical to the wide deployment of cloud computing services. Time-based, prediction-based rejuvenation of virtual machines (VM) and virtual machine monitors (VMM), VM failover, and live VM migration are common high-availability (HA) techniques in a virtualized system. This paper investigates the effect of combination of these availability techniques on VM availability in a virtualized system where various software and hardware failures may occur. For each combination, we construct analytic models. Results show that: (1) rejuvenation mechanisms improve VM availability; (2) prediction-based rejuvenation enhances VM availability much more than time-based VM rejuvenation when prediction successful probability is above 70%, regardless of whether failover and/or live VM migration are also deployed; (3) the failover mechanism outperforms live VM migration, although they can work together for higher availability of VM. In addition, they can combine with software rejuvenation mechanisms for even higher availability; and (4) time interval setting is critical to a time-based rejuvenation mechanism. These analytic results provide guidelines for deploying and parameter setting of HA techniques in a virtualized system.},   doi = {10.1093/comjnl/bxx049},   url = {http://webdiis.unizar.es/~ricardo/files/papers/CWRZ-COMPJ-18.pdf},   }
• [2018,article] bibtex
R. J. Rodríguez, R. Tolosana-Calasanz, and O. F. Rana, A Dynamic Data-Throttling Approach to Minimize Workflow Imbalance, ACM Transactions on Internet Technology, pp. 1-22, 2018.
@Article{RTR-TOIT-18,   author = {Ricardo J. Rodr\'{i}guez and Rafael Tolosana-Calasanz and Omer F. Rana},   title = {{A Dynamic Data-Throttling Approach to Minimize Workflow Imbalance}},   journal = {ACM Transactions on Internet Technology},   year = {2018},   pages = {1--22},   note = {Accepted for publication. To appear.},   abstract = {Scientific workflows enable scientists to undertake analysis on large datasets and perform complex scientific simulations. These workflows are often mapped onto distributed and parallel computational infrastructures to speed up their executions. Prior to its execution, a workflow structure may suffer transformations to accommodate the computing infrastructures, normally involving task clustering and partitioning. However, these transformations may cause workflow imbalance because of the difference between execution task times (runtime imbalance) or because of unconsidered data dependencies that lead to data locality issues (data imbalance). In this paper, in order to mitigate these imbalances, we enhance the workflow lifecycle process in use by introducing a workflow imbalance phase that quantifies workflow imbalance after the transformations. Our technique is based on structural analysis of Petri nets, obtained by model transformation of a data-intensive workflow, and Linear Programming techniques. Our analysis can be used to assist workflow practitioners in finding more efficient ways of transforming and scheduling their workflows. Moreover, based on our analysis, we also propose a technique to mitigate workflow imbalance by data throttling. Our approach is based on autonomic computing principles that determine how data transmission must be throttled throughout workflow jobs. Our autonomic data-throttling approach mainly monitors the execution of the workflow and recompute data-throttling values when certain watchpoints are reached and time derivation is observed. 
We validate our approach by a formal proof and by simulations along with the Montage workflow. Our findings show that a dynamic data-throttling approach is feasible, does not introduce a significant overhead, and minimizes the usage of input buffers and network bandwidth.},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RTR-TOIT-18.pdf},   }
• [2017,article] bibtex
R. J. Rodríguez, Evolution and Characterization of Point-of-Sale RAM Scraping Malware, Journal in Computer Virology and Hacking Techniques, vol. 13, iss. 3, pp. 179-192, 2017.
@Article{R-CVHT-17,   author = {Ricardo J. Rodr\'{i}guez},   title = {{Evolution and Characterization of Point-of-Sale RAM Scraping Malware}},   journal = {Journal in Computer Virology and Hacking Techniques},   year = {2017},   volume = {13},   number = {3},   pages = {179--192},   month = aug, abstract = {Credit and debit cards are becoming the primary payment method for purchases. These payments are normally performed in merchant's in-store systems as known as Point-of-Sale (POS) systems. Since these systems handle payment card data while processing the customer transactions, they are becoming a primary target for cybercriminals. These data, when remain at memory, are scraped and exfiltrated by specially crafted malicious software named POS RAM scraping malware. In recent years, large data breaches occurred in well-known US retail companies were caused by this kind of malware. In this paper, we study the features of these malware based on their behavior on different stages: infection and persistence, process and data of interest search, and exfiltration. Then, we classify samples of 22 known POS RAM scraping malware families from 2009 to 2015 according to these features. Our findings show these malware are still immature and use well-defined behavioral patterns for data acquirement and exfiltration, which may make their malicious activity easily detectable by process and network monitoring tools.},   doi = {10.1007/s11416-016-0280-4},   issn = {2263-8733},   url = {http://webdiis.unizar.es/~ricardo/files/papers/R-CVHT-17.pdf} }
• [2017,article] bibtex
R. J. Rodríguez and J. C. Garcia-Escartin, Security Assessment of the Spanish Contactless Identity Card, IET Information Security, vol. 11, iss. 6, pp. 386-393(7), 2017.
@Article{RG-IFS-17,   author = {Ricardo J. Rodr\'{i}guez and Juan Carlos Garcia-Escartin},   title = {{Security Assessment of the Spanish Contactless Identity Card}},   journal = {IET Information Security},   year = {2017},   volume = {11},   number = {6},   pages = {386--393(7)},   month = nov, abstract = {The theft of personal information to fake the identity of a person is a common threat normally performed by individual criminals, terrorists, or crime rings to commit fraud or other felonies. Recently, the Spanish identity card, which provides enough information to hire on-line products such as mortgages or loans, was updated to incorporate a Near Field Communication (NFC) chip as electronic passports do. This contactless interface brings a new attack vector for criminals, who might take advantage of the RFID communication to virtually steal personal information. In this paper, we consider as case study the recently deployed contactless Spanish identity card assessing its security against identity theft. In particular, we evaluated the security of one of the contactless access protocol as implemented in the contactless Spanish identity card, and found that no defenses against on-line brute-force attacks were incorporated. We then suggest two countermeasures to protect against these attacks. Furthermore, we also analyzed the pseudo-random number generator within the card, which passed all the performed tests with good results.},   doi = {10.1049/iet-ifs.2017.0299},   issn = {1751-8709},   publisher = {Institution of Engineering and Technology},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RG-IFS-17.pdf},   }
• [2017,article] bibtex
R. J. Rodríguez, A Petri Net Tool for Software Performance Estimation Based on Upper Throughput Bounds, Automated Software Engineering, vol. 24, iss. 1, pp. 73-99, 2017.
@Article{R-AUSE-17,   author = {Ricardo J. Rodr\'{i}guez},   title = {{A Petri Net Tool for Software Performance Estimation Based on Upper Throughput Bounds}},   journal = {Automated Software Engineering},   year = {2017},   volume = {24},   number = {1},   pages = {73--99},   month = jan, abstract = {Functional and non-functional properties analysis (i.e., dependability, security, or performance) ensures that requirements are fulfilled during the design phase of software systems. However, the Unified Modelling Language (UML), standard de facto in industry for software systems modelling, is unsuitable for any kind of analysis but can be tailored for specific analysis purposes through profiling. For instance, the MARTE profile enables to annotate performance data within UML models that can be later transformed to formal models (e.g., Petri nets or Timed Automatas) for performance evaluation. A performance (or throughput) estimation in such models normally relies on a whole exploration of the state space, which becomes unfeasible for large systems. To overcome this issue upper throughput bounds are computed, which provide an approximation to the real system throughput with a good complexity-accuracy trade-off. This paper introduces a tool, named PeabraiN, that estimates the performance of software systems via their UML models. To do so, UML models are transformed to Petri nets where performance is estimated based on upper throughput bounds computation. PeabraiN also allows to compute other features on Petri nets, such as the computation of upper and lower marking place bounds, and to simulate using an approximate (continuous) method. We show the applicability of PeabraiN by evaluating the performance of a building closed circuit TV system.},   doi = {10.1007/s10515-015-0186-2},   url = {http://webdiis.unizar.es/~ricardo/files/papers/R-AUSE-17.pdf} }
• [2017,article] bibtex
R. J. Rodríguez and J. Campos, On Throughput Approximation of Resource Allocation Systems by Bottleneck Regrowing, IEEE Transactions on Control Systems Technology, 2017.
@Article{RC-TCST-17,   author = {Ricardo J. Rodr\'{i}guez and Javier Campos},   title = {{On Throughput Approximation of Resource Allocation Systems by Bottleneck Regrowing}},   journal = {IEEE Transactions on Control Systems Technology},   year = {2017},   note = {Accepted for publication. To appear.},   abstract = {Complex systems such as manufacturing, logistics, or web services, are commonly modeled as Discrete Event Systems dealing with the resource-allocation problem. In particular, Petri nets are a widely used formalism to model these systems. Although their functional properties have been extensively studied in the literature, their non-functional properties (such as throughput) have usually been ignored. In this paper, we focus on a Petri net subclass useful for modeling concurrent sequential processes with shared resources, termed as $S^4PR$ nets. For these nets, we present an iterative strategy that makes intensive use of mathematical programming problems to approximate system throughput. Initially, our strategy selects the slowest part (a subsystem) of the net. Then, the next slowest parts are considered. In each step, the throughput is computed solving analytically the underlying CTMC when feasible (or by simulation, otherwise). Since only certain subsystems are considered, the state-explosion problem inherent to the increasing net size is mitigated. We evaluate our strategy in a set of randomly generated $S^4PR$ nets. Our findings show that the throughput improves the upper throughput bound computation by almost 20\% and that small portions of the net are enough to approximate system throughput.},   doi = {10.1109/TCST.2017.2768512},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RC-TCST-17.pdf},   }
• [2016,article] bibtex
R. J. Rodríguez, I. Rodríguez-Gastón, and J. Alonso, Towards the Detection of Isolation-Aware Malware, IEEE Latin America Transactions (Revista IEEE America Latina), vol. 14, iss. 2, pp. 1024-1036, 2016.
@Article{RRA-LATAM-16,   author = {Ricardo J. Rodr\'{i}guez and I\~{n}aki Rodr\'{i}guez-Gast\'{o}n and Javier Alonso},   title = {{Towards the Detection of Isolation-Aware Malware}},   journal = {IEEE Latin America Transactions (Revista IEEE America Latina)},   year = {2016},   volume = {14},   number = {2},   pages = {1024--1036},   month = feb, abstract = {Malware analysis tools have evolved in the last years providing tightly controlled sandbox and virtualised environments where malware is analysed minimising potential harmful consequences. Unfortunately, malware has advanced in parallel, being currently able to recognise when is running in sandbox or virtual environments and then, behaving as a non-harmful application or even not executing at all. This kind of malware is usually called analysis-aware malware. In this paper, we propose a tool to detect the evasion techniques used by analysis-aware malware within sandbox or virtualised environments. Our tool uses Dynamic Binary Instrumentation to maintain the binary functionality while executing arbitrary code. We evaluate the tool under a set of well-known analysis-aware malware showing its current effectiveness. Finally, we discuss limitations of our proposal and future directions.},   doi = {10.1109/TLA.2016.7437254},   issn = {1548-0992},   keywords = {Instruments;Malware;Proposals;Silicon compounds;Software;Virtual environments;analysis-aware malware;binary analysis;dynamic binary instrumentation},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RRA-LATAM-16.pdf} }
• [2016,article] bibtex
R. J. Rodríguez, On Qualitative Analysis of Fault Trees Using Structurally Persistent Nets, IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 46, iss. 2, pp. 282-293, 2016.
@Article{R-SMC-Sys-16, Title = {{On Qualitative Analysis of Fault Trees Using Structurally Persistent Nets}},   Author = {Ricardo J. Rodr\'{i}guez},   Journal = {IEEE Transactions on Systems, Man and Cybernetics: Systems},   Year = {2016},   Month = {Feb},   Number = {2},   Pages = {282--293},   Volume = {46},   Abstract = {A Fault Tree (FT) defines an undesired top event, characterizing it using logic combinations of lower-level undesired events. In this paper, we focus on coherent FTs, i.e., the logic is restricted to AND/OR formulae. Fault Tree analysis is used to identify and assess the Minimal Cut Sets (MCS) of an FT, which define the minimal set of events leading to the undesired state. The dual of MCS is Minimal Path Set (MPS). MCS and MPS are commonly used for qualitative evaluation of FTs in safety and reliability engineering. This paper explores computation of the MCS/MPS of an FT by means of structural analysis (namely, computation of minimal p-semiflows) of a Petri net that represents the FT. To this end, we propose a formal definition of a coherent FT and a transformation from this model to a Petri net subclass (namely, structurally persistent nets). We also prove the relationship between minimal p-semiflows and MCS/MPS in an FT. In addition, we propose an algorithm that uses linear programming techniques to compute the MCS/MPS in an FT. Finally, we put our findings into practice by qualitatively evaluating the FT of a pressure tank system.},   Doi = {10.1109/TSMC.2015.2437360},   ISSN = {2168-2216},   Keywords = {Fault Trees, Petri nets, Linear Programming, qualitative evaluation, cut sets},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/R-SMC-Sys-16.pdf} }
• [2015,article] bibtex
S. Marrone, R. J. Rodríguez, R. Nardone, F. Flammini, and V. Vittorini, On Synergies of Cyber and Physical Security Modelling in Vulnerability Assessment of Railway Systems, Computers and Electrical Engineering, vol. 47, pp. 275-285, 2015.
@Article{MRNFV-CAEE-15,   author = {Stefano Marrone and Ricardo J. Rodr\'{i}guez and Roberto Nardone and Francesco Flammini and Valeria Vittorini},   title = {{On Synergies of Cyber and Physical Security Modelling in Vulnerability Assessment of Railway Systems}},   journal = {Computers and Electrical Engineering},   year = {2015},   volume = {47},   pages = {275--285},   month = oct, issn = {0045-7906},   abstract = {The multifaceted nature of cyber-physical systems needs holistic study methods to detect essential aspects and interrelations among physical and cyber components. Like the systems themselves, security threats feature both cyber and physical elements. Although to apply divide et impera approaches helps handling system complexity, to consider just one aspect at a time does not provide adequate risk awareness and hence does not allow to design the most appropriate countermeasures. To support this claim, in this paper we provide a joint application of two model-driven techniques for physical and cyber-security evaluation. We apply two UML profiles, namely SecAM (for cyber-security) and CIP\_VAM (for physical security), in combination. In such a way, we demonstrate the synergy between both profiles and the need for their tighter integration in the context of a reference case study from the railway domain.},   doi = {10.1016/j.compeleceng.2015.07.011},   keywords = {Cyber-physical systems, Vulnerability assessment, UML profile, Bayesian networks, Generalized stochastic Petri nets},   url = {http://webdiis.unizar.es/~ricardo/files/papers/MRNFV-CAEE-15.pdf},   }
• [2015,article] bibtex
R. J. Rodríguez, J. Merseguer, and S. Bernardi, Modelling Security of Critical Infrastructures: A Survivability Assessment, The Computer Journal, vol. 58, iss. 10, pp. 2313-2327, 2015.
@Article{RMB-COMPJ-15,   author = {Ricardo J. Rodr\'{i}guez and Jos\'{e} Merseguer and Simona Bernardi},   title = {{Modelling Security of Critical Infrastructures: A Survivability Assessment}},   journal = {The Computer Journal},   year = {2015},   volume = {58},   number = {10},   pages = {2313--2327},   month = oct, abstract = {Critical infrastructures, usually designed to handle disruptions caused by human errors or random acts of nature, define assets whose normal operation must be guaranteed to maintain its essential services for human daily living. Malicious intended attacks to these targets need to be considered during system design. To face with these situations, defense plans must be developed in advance. In this paper, we present a UML profile, named SecAM, that enables the modelling and security specification for critical infrastructures during the early phases (requirements, design) of systems development life-cycle. SecAM endows security assessment, through survivability analysis, of different security solutions before system deployment. As a case study, we evaluate the survivability of the Saudi Arabia crude-oil pipeline network under two different attack scenarios. The stochastic analysis, carried out with Generalized Stochastic Petri nets, quantitatively estimates the minimisation of attack damages into the crude-oil network.},   doi = {10.1093/comjnl/BXU096},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RMB-COMPJ-15.pdf},   }
• [2014,article] bibtex
R. J. Rodríguez, J. A. Artal, and J. Merseguer, Performance Evaluation of Dynamic Binary Instrumentation Frameworks, IEEE Latin America Transactions (Revista IEEE America Latina), vol. 12, iss. 8, pp. 1572-1580, 2014.
@Article{RAM-LATAM-14,   author = {Ricardo J. Rodr\'{i}guez and Juan Antonio Artal and Jos\'{e} Merseguer},   title = {{Performance Evaluation of Dynamic Binary Instrumentation Frameworks}},   journal = {IEEE Latin America Transactions (Revista IEEE America Latina)},   year = {2014},   volume = {12},   number = {8},   pages = {1572--1580},   month = {December},   abstract = {Code analysis, static or dynamic, is a primary mean for improving correctness and performance of software applications. Dynamic binary analysis (DBA) refers the methods to analyse runtime behaviour of binary code. Nowadays, DBA tools are implemented using dynamic binary instrumentation (DBI) frameworks, which can add arbitrary code into the execution flow of the binary. Unfortunately, a DBA tool increases the execution time of the analysed binary dramatically, as extra code is being executed. Experiments got figures with increments of 26x. Therefore, it is crucial for DBA tool construction to know exact figures about such penalties and their roots. Hence, we conduct a performance evaluation of leading DBI frameworks, namely Pin, Valgrind, and DynamoRIO, for which we have built a benchmark selecting a bunch of representative tools. The evaluation that we procure here provides guidance to choose the best DBI framework suited for different needs. Moreover, the benchmark by itself is a tool ready to be eventually used for performance evaluation of future DBI frameworks.},   doi = {10.1109/TLA.2014.7014530},   issn = {1548-0992},   keywords = {Benchmark testing;Hardware;Instruments;Kernel;Performance evaluation;Silicon compounds;Benchmark;Performance evaluation;Software Tools;Testing},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RAM-LATAM-14.pdf},   }
• [2013,article] bibtex
R. J. Rodríguez, J. Júlvez, and J. Merseguer, Quantification and Compensation of the Impact of Faults in System Throughput, Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, vol. 227, iss. 6, pp. 614-628, 2013.
@Article{RJM-JRR-13, Title = {{Quantification and Compensation of the Impact of Faults in System Throughput}},   Author = {Ricardo J. Rodr\'{i}guez and Jorge J\'{u}lvez and Jos\'{e} Merseguer},   Journal = {Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability},   Year = {2013},   Month = {December},   Number = {6},   Pages = {614--628},   Volume = {227},   Abstract = {Performability relates the performance (throughput) and reliability of software systems whose normal behaviour may degrade owing to the existence of faults. These systems, naturally modelled as discrete event systems using shared resources, can incorporate fault-tolerant techniques to mitigate such a degradation. In this article, compositional fault-tolerant models based on Petri nets, which make their performability analysis easier, are proposed. Besides, two methods to compensate for the existence of faults are provided: an iterative algorithm to compute the number of extra resources needed, and an integer-linear programming problem that minimises the cost of incrementing resources and/or decrementing fault-tolerant activities. The applicability of the developed methods is shown on a Petri net that models a secure database system.},   Doi = {10.1177/1748006X13492284},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RJM-JRR-13.pdf} }
• [2013,article] bibtex
R. J. Rodríguez, J. Júlvez, and J. Merseguer, On the Performance Estimation and Resource Optimisation in Process Petri Nets, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, iss. 6, pp. 1385-1398, 2013.
@Article{RJM-SMC-Sys-13,   author = {Ricardo J. Rodr\'{i}guez and Jorge J\'{u}lvez and Jos\'{e} Merseguer},   title = {{On the Performance Estimation and Resource Optimisation in Process Petri Nets}},   journal = {IEEE Transactions on Systems, Man, and Cybernetics: Systems},   year = {2013},   volume = {43},   number = {6},   pages = {1385--1398},   month = mar, issn = {2168-2216},   abstract = {Many artificial systems can be modeled as discrete dynamic systems in which resources are shared among different tasks. The performance of such systems, which is usually a system requirement, heavily relies on the number and distribution of such resources. The goal of this paper is twofold: first, to design a technique to estimate the steady-state performance of a given system with shared resources, and second, to propose a heuristic strategy to distribute shared resources so that the system performance is enhanced as much as possible. The systems under consideration are assumed to be large systems, such as service-oriented architecture (SOA) systems, and modeled by a particular class of Petri nets (PNs) called process PNs. In order to avoid the state explosion problem inherent to discrete models, the proposed techniques make intensive use of linear programming (LP) problems.},   doi = {10.1109/TSMC.2013.2245118},   keywords = {Complexity theory;Computational modeling;Estimation;Optimization;Steady-state;Throughput;Vectors;Discrete event systems (DESs);Petri nets (PNs);performance evaluation;software performance},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RJM-SMC-Sys-13.pdf},   }

Conference Papers

• [2018,inproceedings] bibtex
X. Chang, S. Lv, R. J. Rodríguez, and K. Trivedi, Survivability Model for Security and Dependability Analysis of a Vulnerable Critical System, in Proceedings of the 8th International Workshop on Internet of Things: Privacy, Security and Trust (IoTPST 2018), 2018.
@InProceedings{CLRT-IoTPST-18,   author = {Xiaolin Chang and Shaohua Lv and Ricardo J. Rodr\'{i}guez and Kishor Trivedi},   title = {{Survivability Model for Security and Dependability Analysis of a Vulnerable Critical System}},   booktitle = {Proceedings of the 8th International Workshop on Internet of Things: Privacy, Security and Trust (IoTPST 2018)},   year = {2018},   note = {To appear.},   abstract = {This paper aims to analyze transient security and dependability of a vulnerable critical system, under vulnerability-related attack and two reactive defense strategies, from a severe vulnerability announcement until the vulnerability is fully removed from the system. By severe, we mean that the vulnerability-based malware could cause significant damage to the infected system in terms of security and dependability while infecting more and more new vulnerable computer systems. We propose a Markov chain-based survivability model for capturing the vulnerable critical system behaviors during the vulnerability elimination process. A high-level formalism based on Stochastic Reward Nets is applied to automatically generate and solve the survivability model. Survivability metrics are defined to quantify system attributes. The proposed model and metrics not only enable us to quantitatively assess the system survivability in terms of security risk and dependability, but also provide insights on the system investment decision. Numerical experiments are constructed to study the impact of key parameters on system security, dependability and profit.},   url = {http://webdiis.unizar.es/~ricardo/files/papers/CLRT-IoTPST-18.pdf},   }
• [2018,inproceedings] bibtex
R. J. Rodríguez, M. Martín-Pérez, and I. Abadía, A Tool to Compute Approximation Matching between Windows Processes, in Proceedings of the 2018 6th International Symposium on Digital Forensic and Security (ISDFS), 2018, pp. 313-318.
@InProceedings{RMA-ISDFS-18,   author = {Ricardo J. Rodr\'{i}guez and Miguel Mart\'{i}n-P\'{e}rez and I\~{n}aki Abad\'{i}a},   title = {{A Tool to Compute Approximation Matching between Windows Processes}},   booktitle = {Proceedings of the 2018 6th International Symposium on Digital Forensic and Security (ISDFS)},   year = {2018},   pages = {313--318},   month = mar, abstract = {Finding identical digital objects (or artifacts) during a forensic analysis is commonly achieved by means of cryptographic hashing functions, such as MD5, SHA1, or SHA-256, to name a few. However, these functions suffer from the {\em avalanche} effect property, which guarantees that if an input is changed slightly the output changes significantly. Hence, these functions are unsuitable for typical digital forensics scenarios where a forensics memory image from a likely compromised machine shall be analyzed. This memory image file contains a snapshot of processes (instances of executable files) which were in execution when the dumping process was done. However, processes are relocated in memory and contain dynamic data that depend on the current execution and environmental conditions. Therefore, the comparison of cryptographic hash values of different processes from the same executable file will be negative. Bytewise approximation matching algorithms may help in these scenarios, since they provide a similarity measurement in the range $[0,1]$ between similar inputs instead of a yes/no answer (in the range $\{0,1\}$). In this paper, we introduce ProcessFuzzyHash, a Volatility plugin that enables us to compute approximation hash values of processes contained in a Windows memory dump.},   doi = {10.1109/ISDFS.2018.8355372},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RMA-ISDFS-18.pdf},   }
• [2017,inproceedings] bibtex
A. Zimmermann, A. Canabal Lavista, and R. J. Rodríguez, Some Notes on Rare-Event Simulation Challenges: Fast Abstract, in Proceedings of the 11th EAI International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2017), 2017.
@InProceedings{ZCR-VALUETOOLS-17,   author = {Armin Zimmermann and Andr\'{e}s Canabal Lavista and Ricardo J. Rodr\'{i}guez},   title = {{Some Notes on Rare-Event Simulation Challenges: Fast Abstract}},   booktitle = {Proceedings of the 11th EAI International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2017)},   year = {2017},   publisher = {ACM},   abstract = {Rare-event simulation methods have led to promising algorithms for the quantitative evaluation of systems which are both too complex for a numerical analysis and suffer from unacceptable simulation run times. Dependability prediction during the design of real-life size industrial systems, for instance, can benefit significantly from progress in this field. However, there are still some gaps to be filled to allow general applicability of the existing methods. This fast abstract points out open issues to promote discussion of future research directions in this field.},   doi = {10.1145/3150928.3150963},   url = {http://webdiis.unizar.es/~ricardo/files/papers/ZCR-VALUETOOLS-17.pdf},   }
• [2017,inproceedings] bibtex
Á. Botas, R. J. Rodríguez, V. Matellán, and J. F. García, Empirical Study to Fingerprint Public Malware Analysis Services, in Proceedings of the International Joint Conference SOCO’17-CISIS’17-ICEUTE’17, 2017, pp. 589-599.
@InProceedings{BRMG-CISIS-17,   author = {Botas, {\'A}lvaro and Rodr{\'i}guez, Ricardo J. and Matell{\'a}n, Vicente and Garc{\'i}a, Juan F.},   title = {{Empirical Study to Fingerprint Public Malware Analysis Services}},   booktitle = {Proceedings of the International Joint Conference SOCO'17-CISIS'17-ICEUTE'17},   year = {2017},   volume = {649},   series = {Advances in Intelligent Systems and Computing},   pages = {589--599},   publisher = {Springer International Publishing},   abstract = {The evolution of malicious software (malware) analysis tools provided controlled, isolated, and virtual environments to analyze malware samples. Several services found on the Internet provide users with automatic systems to analyze malware samples, such as VirusTotal, Jotti, or ClamAV, to name a few. Unfortunately, malware is currently incorporating techniques to recognize execution in a virtual or sandbox environment. When an analysis environment is detected, malware behaves as a benign application or even shows no activity. In this work, we present an empirical study and characterization of automatic public malware analysis services. In particular, we consider 26 different services. We also show a set of features that allow these services to be easily fingerprinted as analysis environments. Finally, we propose a method to mitigate fingerprinting.},   doi = {10.1007/978-3-319-67180-2_57},   isbn = {978-3-319-67180-2},   url = {http://webdiis.unizar.es/~ricardo/files/papers/BRMG-CISIS-17.pdf},   }
• [2016,incollection] bibtex
R. J. Rodríguez and S. Marrone, Model-Based Vulnerability Assessment of Self-Adaptive Protection Systems, in Intelligent Distributed Computing IX, Novais, P., Camacho, D., Analide, C., El Fallah Seghrouchni, A., and Badica, C., Eds., Springer International Publishing, 2016, vol. 616, pp. 439-449.
@InCollection{RM-WSRL-16, Title = {{Model-Based Vulnerability Assessment of Self-Adaptive Protection Systems}},   Author = {Ricardo J. Rodr\'{i}guez and Stefano Marrone},   Booktitle = {Intelligent Distributed Computing IX},   Publisher = {Springer International Publishing},   Year = {2016},   Editor = {Novais, Paulo and Camacho, David and Analide, Cesar and El Fallah Seghrouchni, Amal and Badica, Costin},   Pages = {439--449},   Series = {Studies in Computational Intelligence},   Volume = {616},   Abstract = {Security mechanisms are at the base of modern computer systems, demanded to be more and more reactive to changing environments and malicious intentions. Security policies unable to change in time are destined to be exploited and thus, system security compromised. However, the ability to properly change security policies is only possible once the most effective mechanism to adopt under specific conditions is known. This paper proposes a model-based approach to accomplish this goal: a vulnerability model of the system is built by means of a model-based, layered security approach, and used to quantitatively evaluate the best protection mechanism at a given time and hence, to adapt the system to changing environments. The evaluation relies on the use of a powerful, flexible formalism such as Dynamic Bayesian Networks.},   Doi = {10.1007/978-3-319-25017-5_41},   ISBN = {978-3-319-25015-1},   Language = {English},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RM-WSRL-16.pdf} }
• [2016,inproceedings] bibtex
R. Nardone, R. J. Rodríguez, and S. Marrone, Formal Security Assessment of Modbus Protocol, in Proceedings of the 11th International Conference for Internet Technology and Secured Transactions, 2016, pp. 142-147.
@InProceedings{NRM-ITST-16,   author = {R. Nardone and R. J. Rodr\'{i}guez and S. Marrone},   title = {{Formal Security Assessment of Modbus Protocol}},   booktitle = {Proceedings of the 11th International Conference for Internet Technology and Secured Transactions},   year = {2016},   pages = {142--147},   month = dec, publisher = {IEEE},   abstract = {Critical infrastructures such as water treatment, power distribution, or telecommunications provide essential services on a day-to-day basis. Any service discontinuity may have a high impact on our society and even our safety. Thus, security of these systems against intentional threats must be guaranteed. However, many of these systems are based on protocols initially designed to operate on closed, unroutable networks, making them an easy target for cybercriminals. In this regard, Modbus is a widely adopted protocol in control systems. The Modbus protocol, however, lacks security properties and is vulnerable to plenty of attacks (such as spoofing, flooding, or replay, to name a few). In this paper, we propose a formal modeling of the Modbus protocol using an extension of hierarchical state machines that is automatically transformed to a Promela model. This model allows us to find counterexamples of security properties by model checking. In particular, in this paper we prove the existence of a man-in-the-middle attack in the Modbus protocol. Our approach also allows us to formally evaluate security properties in future extensions of the Modbus protocol.},   doi = {10.1109/ICITST.2016.7856685},   url = {http://webdiis.unizar.es/~ricardo/files/papers/NRM-ICITST-16.pdf},   }
• [2016,inproceedings] bibtex
L. García and R. J. Rodríguez, A Peek Under the Hood of iOS Malware, in Proceedings of the 2016 11th International Conference on Availability, Reliability and Security (ARES), 2016, pp. 590-598.
@InProceedings{GR-WMA-16,   author = {Laura Garc\'{i}a and Ricardo J. Rodr\'{i}guez},   title = {{A Peek Under the Hood of iOS Malware}},   booktitle = {Proceedings of the 2016 11th International Conference on Availability, Reliability and Security (ARES)},   year = {2016},   pages = {590--598},   month = aug, abstract = {Malicious software specially crafted to proliferate in mobile platforms is becoming a serious threat, as reported by numerous software security vendors during the last years. Android and iOS are nowadays the leaders of the mobile OS market share. While malware targeting Android is largely studied, little attention is paid to iOS malware. In this paper, we fill this gap by studying and characterizing malware targeting iOS devices. In this regard, we study the features of iOS malware and classify samples of 36 iOS malware families discovered between 2009 and 2015. We also show the methodology for iOS malware analysis and provide a detailed analysis of a malware sample. Our findings evidence that most of them are distributed outside official markets, target jailbroken iOS devices, and very few exploit any vulnerability.},   doi = {10.1109/ARES.2016.15},   keywords = {iOS, malware, attacks, threats, classification},   url = {http://webdiis.unizar.es/~ricardo/files/papers/GR-WMA-16.pdf},   }
• [2016,inproceedings] bibtex
R. J. Rodríguez, X. Chang, X. Li, and K. S. Trivedi, Survivability Analysis of a Computer System under an Advanced Persistent Threat Attack, in Proceedings of the 3rd International Workshop on Graphical Models for Security, 2016, pp. 134-149.
@InProceedings{RCLT-GraMSec-16,   author = {Ricardo J. Rodr\'{i}guez and Xiaolin Chang and Xiaodan Li and Kishor S. Trivedi},   title = {{Survivability Analysis of a Computer System under an Advanced Persistent Threat Attack}},   booktitle = {Proceedings of the 3rd International Workshop on Graphical Models for Security},   year = {2016},   editor = {Kordy, Barbara and Ekstedt, Mathias and Kim, Seong Dong},   volume = {9987},   pages = {134--149},   abstract = {Computer systems are potentially targeted by cybercriminals by means of specially crafted malicious software called Advanced Persistent Threats (APTs). As a consequence, any security attribute of the computer system may be compromised: disruption of service (availability), unauthorized data modification (integrity), or exfiltration of sensitive data (confidentiality). An APT starts with the exploitation of software vulnerability within the system. Thus, vulnerability mitigation strategies must be designed and deployed in a timely manner to reduce the window of exposure of vulnerable systems. In this paper, we evaluate the survivability of a computer system under an APT attack using a Markov model. Generation and solution of the Markov model are facilitated by means of a high-level formalism based on stochastic Petri nets. Survivability metrics are defined to quantify security attributes of the system from the public announcement of a software vulnerability and during the system recovery. The proposed model and metrics not only enable us to quantitatively assess the system survivability in terms of security attributes but also provide insights on the cost/revenue trade-offs of investment efforts in system recovery such as vulnerability mitigation strategies. Sensitivity analysis through numerical experiments is carried out to study the impact of key parameters on system secure survivability.},   doi = {10.1007/978-3-319-46263-9_9},   keywords = {APT, Cyberattacks, Markov chains, Stochastic reward nets, Security metrics, Survivability, Transient analysis},   url = {http://webdiis.unizar.es/~ricardo/files/papers/RCLT-GraMSec-16.pdf},   }
• [2015,inproceedings] bibtex
E. Gómez-Martínez, R. J. Rodríguez, L. Etxeberria, M. Illarramendi, and C. Benac, Model-Based Verification of Safety Contracts, in Proceedings of the 1st International Workshop on Safety and Formal Methods (SaFoMe), 2015, pp. 101-115.
@InProceedings{GREIB-SAFOME-15, Title = {{Model-Based Verification of Safety Contracts}},   Author = {Elena G\'{o}mez-Mart\'{i}nez and Ricardo J. Rodr\'{i}guez and Leire Etxeberria and Miren Illarramendi and Clara Benac},   Booktitle = {Proceedings of the 1st International Workshop on Safety and Formal Methods (SaFoMe)},   Year = {2015},   Pages = {101--115},   Publisher = {Springer International Publishing},   Series = {Lecture Notes in Computer Science},   Volume = {8938},   Abstract = {The verification of safety becomes crucial in critical systems where human lives depend on the correct functioning of such systems. Formal methods have often been advocated as necessary to ensure the reliability of software systems, albeit with a considerable effort. In any case, such an effort is cost-effective when verifying safety-critical systems. Safety requirements are usually expressed using safety contracts, in terms of assumptions and guarantees. To facilitate the adoption of formal methods in the safety-critical software industry, we propose the use of well-known modelling languages, such as UML, to model a software system, and the use of OCL to express the system safety contracts within UML. A UML model enriched with OCL constraints is then transformed to a Petri net model that enables us to formally verify such safety contracts. We apply our approach to an industrial case study that models a train doors controller in charge of the opening and closing of train doors. Our approach allows us to perform an early safety verification, which increases the confidence of software engineers while designing the system.},   Doi = {10.1007/978-3-319-15201-1_7},   ISBN = {978-3-319-15200-4},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/GREIB-SAFOME-15.pdf} }
• [2015,inproceedings] bibtex
J. Vila and R. J. Rodríguez, Practical Experiences on NFC Relay Attacks with Android: Virtual Pickpocketing Revisited, in Proceedings of the 11th International Workshop on RFID Security (RFIDsec), 2015, pp. 87-103.
@InProceedings{VR-RFIDsec-15, Title = {{Practical Experiences on NFC Relay Attacks with Android: Virtual Pickpocketing Revisited}},   Author = {Jos\'{e} Vila and Ricardo J. Rodr\'{i}guez},   Booktitle = {Proceedings of the 11th International Workshop on RFID Security (RFIDsec)},   Year = {2015},   Pages = {87--103},   Publisher = {Springer},   Series = {Lecture Notes in Computer Science},   Volume = {9440},   Abstract = {Near Field Communication (NFC) is a short-range contactless communication standard recently emerging as cashless payment technology. However, NFC has been proved vulnerable to several threats, such as eavesdropping, data modification, and relay attacks. A relay attack forwards the entire wireless communication, thus communicating over larger distances. In this paper, we review and discuss feasibility limitations when performing these attacks in Google's Android OS. We show an experiment proving its feasibility using off-the-shelf NFC-enabled Android devices (i.e., no custom firmware nor root required). Thus, Android NFC-capable malicious software might appear before long to virtually pickpocket contactless payment cards within its proximity.},   Doi = {10.1007/978-3-319-24837-0_6},   Keywords = {NFC; security; relay attacks; Android; contactless payment},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/VR-RFIDsec-15.pdf} }
• [2015,inproceedings] bibtex
A. Botas, R. J. Rodríguez, T. Vaisanen, and P. Zdzichowski, Counterfeiting and Defending the Digital Forensic Process, in Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015, pp. 1966-1971.
@InProceedings{BRVZ-CEWE-15, Title = {{Counterfeiting and Defending the Digital Forensic Process}},   Author = {\'{A}lvaro Botas and Ricardo J. Rodr\'{i}guez and Teemu Vaisanen and Patrycjusz Zdzichowski},   Booktitle = {Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM)},   Year = {2015},   Month = {Oct},   Pages = {1966--1971},   Publisher = {IEEE},   Abstract = {During the last years, criminals have become aware of how the digital evidence that leads them to courts and jail is collected and analyzed. Hence, they have started to develop anti-forensic techniques to evade, hamper, or nullify such evidence. Nowadays, these techniques are broadly used by criminals, causing forensic analysis to be in a state of decay. To defend against these techniques, forensic analysts need to first identify them, and then to somehow mitigate their effects. In this paper, we review anti-forensic techniques and propose a new taxonomy that relates them to the initial phase of a forensic process mainly affected by each technique. Furthermore, we introduce mitigation techniques for these anti-forensic techniques, considering the chance of overcoming them and the difficulty of applying them.},   Doi = {10.1109/CIT/IUCC/DASC/PICOM.2015.291},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/BRVZ-CEWE-15.pdf} }
• [2014,inproceedings] bibtex
R. J. Rodríguez and S. Punnekkat, Cost Optimisation in Certification of Software Product Lines, in Proceedings of the 2014 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), 2014, pp. 509-514.
@InProceedings{RP-ISSREW-14, Title = {{Cost Optimisation in Certification of Software Product Lines}},   Author = {Ricardo J. Rodr\'{i}guez and Sasikumar Punnekkat},   Booktitle = {Proceedings of the 2014 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)},   Year = {2014},   Month = {November},   Pages = {509--514},   Publisher = {IEEE},   Abstract = {Safety-critical systems (such as those in automotive, avionics, railway domains), where a failure can result in accidents with fatal consequences, need to certify their products as per domain-specific safety standards. Safety certification is not only time consuming but also consumes the project budget. Adopting a reuse oriented development and certification paradigm can be highly beneficial in such systems. Though there have been several research efforts on cost models in the context of software reuse as well as software product lines, none of them have addressed the certification related costs. In this paper, we present a cost model for software product lines, which incorporates certification costs as well. We first propose a mathematical model to represent a Software Product Line and then present an approach to compute, using optimisation theory, the set of artifacts that compose a new product assuring an expected level of confidence (that is, a certain Safety Integrity Level) at an optimised cost level. The proposed approach can help developers and software project managers in making efficient design decisions in relation to the choice of the components for a new product variant development as part of a product line.},   Doi = {10.1109/ISSREW.2014.103},   Keywords = {Computational modeling;Linear programming;Mathematical model;Optimization;Safety;Software;Standards;certification costs;cost model;integer programming;safety certification;software product lines},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RP-ISSREW-14.pdf} }
• [2014,inproceedings] bibtex
R. J. Rodríguez, L. Fredlund, Á. Herranz, and J. Mariño, Execution and Verification of UML State Machines with Erlang (Tool Paper), in Proceedings of the 12th International Conference on Software Engineering and Formal Methods (SEFM), 2014, pp. 284-289.
@InProceedings{RFHM-SEFM-14, Title = {{Execution and Verification of UML State Machines with Erlang (Tool Paper)}},   Author = {Ricardo J. Rodr\'{i}guez and Lars-\r{A}ke Fredlund and \'{A}ngel Herranz and Julio Mari\~{n}o},   Booktitle = {Proceedings of the 12th International Conference on Software Engineering and Formal Methods (SEFM)},   Year = {2014},   Editor = {Giannakopoulou, Dimitra and Sala{\"u}n, Gwen},   Pages = {284--289},   Publisher = {Springer},   Series = {Lecture Notes in Computer Science},   Volume = {8702},   Abstract = {Validation of a system design enables the discovery of specification errors before the system is implemented (or tested), thus hopefully reducing the development cost and time. The Unified Modelling Language (UML) is becoming widely accepted for the early specification and analysis of requirements for safety-critical systems, although a better balance between UML’s undisputed flexibility, and a precise unambiguous semantics, is needed. In this paper we introduce UMerL, a tool that is capable of executing and formally verifying UML diagrams (namely, UML state machine, class and object diagrams) by means of a translation of its behavioural information into Erlang. The use of the tool is illustrated with an example in embedded software design.},   Doi = {10.1007/978-3-319-10431-7_22},   ISBN = {978-3-319-10430-0},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RFHM-SEFM-14.pdf} }
• [2014,inproceedings] bibtex
R. J. Rodríguez and E. Gómez-Martínez, Model-based Safety Assessment using OCL and Petri Nets, in Proceedings of the 40th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), 2014, pp. 56-59.
@InProceedings{RG-SEAA-14, Title = {{Model-based Safety Assessment using OCL and Petri Nets}},   Author = {Rodr\'{i}guez, Ricardo J. and G\'{o}mez-Mart\'{i}nez, Elena},   Booktitle = {Proceedings of the 40th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA)},   Year = {2014},   Month = {Aug},   Pages = {56 -- 59},   Abstract = {Safety becomes a primordial assessment in safety-related systems where human lives can be put at risk, needing to comply with safety requirements defined by industry standards such as IEC 61508, ISO 26262 or DO-178C. Safety contracts are useful to specify these requirements (as assumptions and guarantees), thus assuring an expected level of confidence. Verifying the safety requirements is estimated to represent more than half of the overall system development costs. In this paper, we propose a model-based verification approach that addresses safety verification from the very beginning of system development, thus saving costs. Namely, we use UML for system design and the Object Constraint Language (OCL) for specifying safety contracts, while verification is carried out using Petri nets. As a case study, we assess the safety of an embedded system that models a fire prevention system in a hospital building.},   Doi = {10.1109/SEAA.2014.36},   Keywords = {Computational modeling;Contracts;Petri nets;Safety;Software;Unified modeling language;OCL;Petri nets;UML;model-based;safety assessment},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RG-SEAA-14.pdf} }
• [2013,inproceedings] bibtex
R. J. Rodríguez, L. Fredlund, and Á. Herranz, From UML State-Machine Diagrams to Erlang, in Proceedings of the XIII Spanish Conference on Programming and Computer Languages (PROLE 2013), 2013, pp. 288-299.
@InProceedings{RFH-PROLE-13, Title = {{From UML State-Machine Diagrams to Erlang}},   Author = {Ricardo J. Rodr\'{i}guez and Lars-\r{A}ke Fredlund and \'{A}ngel Herranz},   Booktitle = {Proceedings of the XIII Spanish Conference on Programming and Computer Languages (PROLE 2013)},   Year = {2013},   Month = {September},   Pages = {288--299},   Abstract = {The Unified Modelling Language (UML) is a semi-formal modelling language useful for representing architectural and behavioural aspects of concurrent and distributed systems. In this paper we propose a transformation from UML State-Machine diagrams to Erlang code. Erlang is a functional language, with strict evaluation, single assignment, and dynamic typing, and with good support for concurrency and distribution. The contribution of this transformation is twofold: it can reduce development time, and moreover it permits us to validate UML diagrams at an early development stage through the use of Erlang-based model checking techniques.},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RFH-PROLE-13.pdf} }
• [2012,inproceedings] bibtex
R. J. Rodríguez, C. Trubiani, and J. Merseguer, Fault-Tolerant Techniques and Security Mechanisms for Model-based Performance Prediction of Critical Systems, in Proceedings of the 3rd International Symposium on Architecting Critical Systems (ISARCS), 2012, pp. 21-30.
@InProceedings{RTM-ISARCS-12, Title = {{Fault-Tolerant Techniques and Security Mechanisms for Model-based Performance Prediction of Critical Systems}},   Author = {Ricardo J. Rodr\'{i}guez and Catia Trubiani and Jos{\'e} Merseguer},   Booktitle = {Proceedings of the 3rd International Symposium on Architecting Critical Systems (ISARCS)},   Year = {2012},   Pages = {21--30},   Publisher = {ACM},   Abstract = {Security attacks aim at system vulnerabilities that may lead to operational failures. In order to react to attacks, software designers usually introduce Fault-Tolerant Techniques (FTTs), such as recovery procedures, and/or Security Mechanisms (SMs), such as encryption of data. FTTs and SMs inevitably consume system resources, hence they influence the system performance, even affecting its full operability. The goal of this paper is to provide a model-based methodology able to quantitatively estimate the performance degradation due to the introduction of FTTs and/or SMs aimed at protecting critical systems. Such a methodology is able to inform software designers about the performance degradation the system may incur, thus supporting them to find appropriate security strategies while meeting performance requirements. This approach has been applied to a case study in the E-commerce domain, whose experimental results demonstrate its effectiveness.},   Doi = {10.1145/2304656.2304660},   ISBN = {978-3-642-13555-2},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RTM-ISARCS-12.pdf} }
• [2012,inproceedings] bibtex
R. J. Rodríguez, J. Júlvez, and J. Merseguer, PeabraiN: A PIPE Extension for Performance Estimation and Resource Optimisation, in Proceedings of the 12th International Conference on Application of Concurrency to System Designs (ACSD), 2012, pp. 142-147.
@InProceedings{RJM-ACSD-12, Title = {{PeabraiN: A PIPE Extension for Performance Estimation and Resource Optimisation}},   Author = {Ricardo J. Rodr\'{i}guez and Jorge J\'{u}lvez and Jos\'{e} Merseguer},   Booktitle = {Proceedings of the 12th International Conference on Application of Concurrency to System Designs (ACSD)},   Year = {2012},   Pages = {142--147},   Publisher = {IEEE},   Abstract = {Many discrete systems with shared resources from different artificial domains (such as manufacturing, logistics or web services) can be modelled in terms of timed Petri nets. Two studies that may be of interest when dealing with such systems are performance evaluation (or completed jobs per unit of time) and resource optimisation. Exact performance evaluation, however, may become unachievable due to the need for an exhaustive exploration of the state space. In this context, a solution is to estimate the performance by computing bounds. Resource optimisation takes a given budget and distributes resources in order to maximise the system performance. In this paper, we present PeabraiN, a collection of PIPE tool-compliant modules for performance estimation and resource optimisation based on bounds computation for Stochastic Petri Nets. The algorithms supporting the modules make intensive use of linear programming techniques and therefore their computational complexity is low. In addition, other PN properties, such as the structural enabling bound at a transition, the structural marking bound at a place, and visit ratio computation, are added to the PIPE tool as well.},   Doi = {10.1109/ACSD.2012.13},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RJM-ACSD-12.pdf} }
• [2012,inproceedings] bibtex
R. J. Rodríguez, R. Tolosana-Calasanz, and O. F. Rana, Measuring the Effectiveness of Throttled Data Transfers on Data-Intensive Workflows, in Proceedings of the 6th International KES Conference on Agents and Multi-agent Systems — Technologies and Applications, 2012, pp. 144-153.
@InProceedings{RTR-KES-AMSTA-12, Title = {{Measuring the Effectiveness of Throttled Data Transfers on Data-Intensive Workflows}},   Author = {Ricardo J. Rodr\'{i}guez and Rafael Tolosana-Calasanz and Omer F. Rana},   Booktitle = {Proceedings of the 6th International KES Conference on Agents and Multi-agent Systems -- Technologies and Applications},   Year = {2012},   Editor = {Gordan Jezic and Mario Kusek and Ngoc Thanh Nguyen and Robert J. Howlett and Lakhmi C. Jain},   Pages = {144--153},   Publisher = {Springer},   Series = {Lecture Notes in Computer Science},   Volume = {7327},   Abstract = {In data-intensive workflows, data transfer between tasks, which often involves files, is typically accomplished as fast as the network links allow; once transferred, the files are buffered/stored at their destination. Where a task requires multiple files to execute (from different previous tasks), it must remain idle until all files are available. Hence, network bandwidth and buffer/storage within a workflow are often not used effectively. In this paper, we quantitatively measure the impact that applying an intelligent data movement policy can have on buffer/storage in comparison with existing approaches. Our main objective is to propose a metric that considers a workflow structure expressed as a Directed Acyclic Graph (DAG) and performance information collected from past executions of the considered workflow. This metric is intended for use at the design stage, to compare various DAG structures and evaluate their potential for optimisation (of network bandwidth and buffer use).},   Doi = {10.1007/978-3-642-30947-2_18},   ISBN = {978-3-642-30946-5},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RTR-KES-AMSTA-12.pdf} }
• [2012,inproceedings] bibtex
R. J. Rodríguez, R. Tolosana-Calasanz, and O. F. Rana, Automating Data-Throttling Analysis for Data-Intensive Workflows, in Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2012, pp. 310-317.
@InProceedings{RTR-CCGrid-12, Title = {{Automating Data-Throttling Analysis for Data-Intensive Workflows}},   Author = {Ricardo J. Rodr\'{i}guez and Rafael Tolosana-Calasanz and Omer F. Rana},   Booktitle = {Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)},   Year = {2012},   Month = {May},   Pages = {310--317},   Publisher = {IEEE},   Abstract = {Data movement between tasks in scientific workflows has received limited attention compared to task execution. Often the staging of data between tasks is either assumed or the time delay in data transfer is considered to be negligible (compared to task execution). Where data consists of files, such file transfers are accomplished as fast as the network links allow, and once transferred, the files are buffered/stored at their destination. Where a task requires multiple files to execute (from different tasks), it must, however, remain idle until all files are available. Hence, network bandwidth and buffer/storage within a workflow are often not used effectively. We propose an automated workflow structural analysis method for Directed Acyclic Graphs (DAGs) which utilises information from previous workflow executions. The method obtains data-throttling values for the data transfer to enable network bandwidth and buffer/storage capacity to be managed more efficiently. We convert a DAG representation into a Petri net model and analyse the resulting graph using an iterative method to compute data-throttling values. Our approach is demonstrated using the Montage workflow.},   Doi = {10.1109/CCGrid.2012.27},   ISBN = {978-1-4673-1395-7},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RTR-CCGrid-12.pdf} }
• [2010,inproceedings] bibtex
R. J. Rodríguez and J. Merseguer, Integrating Fault-Tolerant Techniques into the Design of Critical Systems, in Proceedings of the 1st International Symposium on Architecting Critical Systems (ISARCS), Prague, Czech Republic, 2010, pp. 33-51.
@InProceedings{RM-ISARCS-10, Title = {{Integrating Fault-Tolerant Techniques into the Design of Critical Systems}},   Author = {Ricardo J. Rodr\'{i}guez and Jos{\'e} Merseguer},   Booktitle = {Proceedings of the 1st International Symposium on Architecting Critical Systems (ISARCS)},   Year = {2010},   Address = {Prague, Czech Republic},   Editor = {Holger Giese},   Month = {June},   Pages = {33--51},   Publisher = {Springer},   Series = {Lecture Notes in Computer Science},   Volume = {6150},   Abstract = {Software designs equipped with specification of dependability techniques can help engineers to develop critical systems. In this work, we start to envision how a software engineer can assess that a given dependability technique is adequate for a given software design, i.e., whether the technique, when applied, will cause the system to meet a dependability requirement (e.g., an availability degree). So, the idea presented here is how to integrate already developed fault-tolerant techniques into software designs for their analysis. On the one hand, we will assume software behavioural designs as a set of UML state-charts properly annotated with profiles to take into account their performance, dependability and security characteristics, i.e., those properties that may hamper a critical system. On the other hand, we will propose UML models for well-known fault-tolerant techniques. Then, the challenge is how to combine both (the software design and the FT techniques) to assist the software engineer. We propose to accomplish this through a formal model, in terms of Petri nets, that offers results early in the life-cycle.},   Doi = {10.1007/978-3-642-13556-9_3},   ISBN = {978-3-642-13555-2},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RM-ISARCS-10.pdf} }
• [2010,inproceedings] bibtex
R. J. Rodríguez and J. Júlvez, Accurate Performance Estimation for Stochastic Marked Graphs by Bottleneck Regrowing, in Proceedings of the 7th European Performance Engineering Workshop (EPEW), 2010, pp. 175-190.
@InProceedings{RJ-EPEW-10, Title = {{Accurate Performance Estimation for Stochastic Marked Graphs by Bottleneck Regrowing}},   Author = {Ricardo J. Rodr\'{i}guez and Jorge J\'{u}lvez},   Booktitle = {Proceedings of the 7th European Performance Engineering Workshop (EPEW)},   Year = {2010},   Month = {September},   Pages = {175--190},   Publisher = {Springer},   Series = {Lecture Notes in Computer Science},   Volume = {6342},   Abstract = {Adequate system performance is usually a critical requirement to be checked during the verification phase of a system. Thus, accurately measuring the performance of current industrial systems, which are often modelled as Discrete Event Systems (DES), is a necessity. Due to the state explosion problem, the performance evaluation of DES becomes increasingly difficult as the size of the systems increases. Some approaches, such as the computation of performance bounds, have been developed to overcome this problem. In this paper we propose a new method to produce performance bounds that are sharper than the ones that can be achieved with current methods. The core of our method is an iterative algorithm that initially considers the most constraining bottleneck cycle of the system and adds other cycles to it in each iteration. The proposed method is explained in detail and then applied to a broad set of Marked Graphs.},   Doi = {10.1007/978-3-642-15784-4_12},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RJ-EPEW-10.pdf} }
• [2010,inproceedings] bibtex
R. J. Rodríguez, J. Merseguer, and S. Bernardi, Modelling and Analysing Resilience as a Security Issue within UML, in Proceedings of the 2nd International Workshop on Software Engineering for Resilient Systems (SERENE), London, United Kingdom, 2010, pp. 42-51.
@InProceedings{RMB-SERENE-10, Title = {{Modelling and Analysing Resilience as a Security Issue within UML}},   Author = {Ricardo J. Rodr\'{i}guez and Jos\'{e} Merseguer and Simona Bernardi},   Booktitle = {Proceedings of the 2nd International Workshop on Software Engineering for Resilient Systems (SERENE)},   Year = {2010},   Address = {London, United Kingdom},   Month = {April},   Pages = {42--51},   Publisher = {ACM},   Abstract = {Modelling system security is not yet common practice in software projects. Among other problems, there is no widely accepted methodology which unifies the actual heterogeneity of security issues when addressing a whole security specification. Certainly, the reality is even worse, since there is no accepted or standard common notation for carrying out the security specification. In this work, we study how modelling security issues, specifically resilience, could be integrated in the MARTE-DAM framework, which allows the expression of performance and dependability requirements in UML models. We base this claim on the close relationship between security and dependability. Indeed, MARTE proposes a framework for non-functional property (NFP) specification, while DAM exploits it for dependability purposes. So, our goal is to take advantage of the common NFP framework while the dependability and security concerns are modelled in a unified view. On the other hand, we consider that the resulting security specification will be useful for developing models in which security-related properties, such as availability, will be analysed. We will clarify these claims by means of an example.},   Doi = {10.1145/2401736.2401741},   Url = {http://webdiis.unizar.es/~ricardo/files/papers/RMB-SERENE-10.pdf} }

Others

• [2011,techreport] bibtex
R. J. Rodríguez, J. Merseguer, and S. Bernardi, A Security Analysis and Modelling profile: an Overview, Dpto. de Informática e Ingeniería de Sistemas, Universidad de Zaragoza, RR-01-11, 2011.
@TechReport{RMB-TR-11,   author = {Ricardo J. Rodr\'{i}guez and Jos\'{e} Merseguer and Simona Bernardi},   title = {{A Security Analysis and Modelling profile: an Overview}},   institution = {{Dpto. de Inform\'{a}tica e Ingenier\'{i}a de Sistemas, Universidad de Zaragoza}},   year = {2011},   number = {{RR-01-11}},   abstract = {Security is a non-functional property that, at least, should be documented early in the software life-cycle, but also assessed whenever possible in these stages. However, modelling system security is not yet common practice in software projects, and there is no standard for this purpose. UML, the de facto standard modelling language, can be tailored for specific purposes through profiling. For example, UML features MARTE, a profile for quantitative analysis of schedulability and performance. Another profile, DAM (which inherits from MARTE), enables UML for quantitative analysis of dependability. Dependability and security share well-known concerns at the core of computing: availability and integrity. In this work, the basis of a security profile for UML is developed. It is built on the MARTE-DAM framework and accounts for the tight relationships between security and dependability. Concretely, access control mechanisms and resilience are addressed in this work to establish the main body of the proposed profile. Some examples are used throughout the paper to illustrate the usage of the profile.},   }