System Analysis and Modeling

    2019-09-10

    System Analysis and Modeling - In this work, a three-level hierarchical model, composed of combinatorial and state-space models, is used: (1) first, the failure rates of the TMS server subsystem units are estimated through continuous-time Markov chains; (2) then, the single-server failure rate is obtained through a Fault Tree that ORs the results of the previous phase; (3) finally, at the top, the overall cluster is modeled. Such a model is specified and analyzed using a formal verification method; in particular, in this paper the probabilistic model checking tool PRISM [18] has been used. This allows automatic verification of specific properties of the defined probabilistic model, useful to determine the compliance of the cluster with SIL2 requirements.

    Mitigation Strategies Enforcement - The most relevant outcomes of the TMS modeling are leveraged to propose possible ways to enhance the final cluster THR. In this work, mitigation strategies are proposed to address both software and hardware failures. The identified solutions were: (i) increasing the number M of Active nodes and enforcing an M-to-1 configuration; (ii) reducing Single Points of Failure (SPF); (iii) enforcing software rejuvenation of the server nodes to reduce the impact of aging-related failures, taking advantage of the system reboot to alternate the Active node. The latter is chosen as a good compromise between cost and impact on the overall THR.

    Experimental Validation - Experiments are essential in the certification process. Standards (e.g., EN 50129) clearly state that "fail-safe behavior of a component under adverse conditions shall be demonstrated", and it is desirable to obtain "evidence that the failure mode will not occur as a result of component ratings being exceeded". The experimental phase in this paper aims to: (1) demonstrate the soundness of the TMS node failure rate estimation; (2) define the TMS TTARF in order to determine a proper rejuvenation period. The system is subjected to a stress loading scheme through a workload generator; the resulting failure and degradation data sets are then used for a QALT/ADT analysis. QALT and ADT are well suited to measure reliability metrics and, at the same time, to understand and quantify the effects of stress. Although QALT/ADT are usually applied to hardware components, they have been shown to be feasible for observing the behavior of software suffering from software aging [20], [22].
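    As an illustration of aim (2), a minimal Python sketch is given below. The TTARF estimate and the safety margin are placeholder assumptions, not results from the paper; the point is only how a measured TTARF could be turned into a rejuvenation period that triggers a reboot well before aging-related failures become likely.

        # Placeholder sketch: derive a rejuvenation period from an estimated TTARF.
        # Both numbers below are assumptions for illustration, not measured values.

        ESTIMATED_TTARF_H = 720.0   # assumed Time To Aging-Related Failure (~30 days)
        SAFETY_MARGIN = 0.25        # rejuvenate after 25% of the estimated TTARF

        rejuvenation_period_h = ESTIMATED_TTARF_H * SAFETY_MARGIN

        print(f"estimated TTARF     : {ESTIMATED_TTARF_H:.0f} h")
        print(f"rejuvenation period : {rejuvenation_period_h:.0f} h "
              f"(reboot and alternate the Active node every {rejuvenation_period_h / 24:.1f} days)")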
    Modeling and analysis - The three-level hierarchical modeling of the TMS cluster is deepened here. In the remainder of this section, different rates, all exponentially distributed, are used. While some, like the failure rates, depend on the unit/system under study, others are fixed. The MTTR, for example, is based on ASTS service agreements, which guarantee unit replacement within 18 h. The Time to Switch, equal to 30 s, is evaluated through on-field tests of Active-Standby switches. In the same manner, the Time to Reboot has been measured and is equal to 302 s.
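    As a minimal sketch (not the paper's PRISM model), the Python snippet below converts these mean times into the exponential rates a state-space model works with (rate = 1/mean, here in events per hour) and computes the steady-state availability of a single repairable node; the node failure rate used for the availability figure is a placeholder assumption.

        # Convert the quoted mean times to exponential rates, in events per hour.
        MTTR_H     = 18.0            # unit replacement, per ASTS service agreement
        T_SWITCH_H = 30.0 / 3600.0   # Active-Standby switch time (30 s)
        T_REBOOT_H = 302.0 / 3600.0  # node reboot time (302 s)

        mu_repair = 1.0 / MTTR_H      # repair rate
        mu_switch = 1.0 / T_SWITCH_H  # switch-over rate
        mu_reboot = 1.0 / T_REBOOT_H  # reboot completion rate

        lambda_node = 1.0e-5          # placeholder node failure rate (per hour)

        # Steady-state availability of a two-state (up/down) repairable node.
        availability = mu_repair / (lambda_node + mu_repair)

        print(f"repair rate: {mu_repair:.3e} /h, switch rate: {mu_switch:.3e} /h, "
              f"reboot rate: {mu_reboot:.3e} /h")
        print(f"single-node steady-state availability: {availability:.9f}")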
    Mitigation strategies - The results of the TMS formal model verification proved that the current system configuration is not compliant with the SIL2 bounds. Hence, mitigation strategies need to be defined and applied in order to reach the desired level of reliability. One approach consists in using different cluster configurations; in this sense, two possibilities may be pursued. An additional approach aims at mitigating the failure probability of the most critical components, i.e., the COTS OS and its CRM, which were proven to be the weak points of the cluster (Fig. 5c). Regarding the OS, measures can be taken at the kernel level, as suggested by Pierce et al. [3], who provide thorough guidelines in this sense. The idea is to configure the kernel to serve only the critical application, disabling unused modules, peripheral drivers, the X Window System graphical interface, and unused user processes. Other actions can be enforced on SUSE Linux and, in particular, on Pacemaker/Corosync: there are CRM settings that affect the response of the system under failure conditions and define policies for the management of the critical service. All of these, however, provide only a small improvement, which is also difficult to quantify.
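    To make the "small improvement" point concrete, the placeholder Python sketch below sums per-subsystem failure rates (the fault-tree OR of independent exponential components) and shows how the node failure rate would move if OS and CRM hardening reduced their rates by an assumed 30%; all rates and the reduction factor are illustrative assumptions, not figures from the paper.

        # Fault-tree OR over independent exponential subsystems: the node fails as
        # soon as any subsystem fails, so the combined rate is the sum of the rates.
        baseline = {"hardware": 2.0e-6, "cots_os": 5.0e-6, "crm": 3.0e-6}  # per hour (placeholders)
        REDUCTION = 0.30  # assumed effect of kernel slimming and CRM policy tuning

        hardened = dict(baseline)
        for key in ("cots_os", "crm"):
            hardened[key] *= (1.0 - REDUCTION)

        before = sum(baseline.values())
        after = sum(hardened.values())
        print(f"node failure rate before hardening: {before:.2e} /h")
        print(f"node failure rate after hardening : {after:.2e} /h "
              f"({100.0 * (1.0 - after / before):.1f}% lower)")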