Predicting System Trustworthiness



  1. Predicting System Trustworthiness Aashna Garg: 2K11/SE/001 Dhruv Jagetiya: 2K11/SE/026 Manish Gupta: 2K11/SE/036 Priyam Katiyar: 2K11/SE/051
  2. Introduction Functional Composability (FC) and functional correctness: o FC is concerned with whether f(a) ∘ f(b) = f(a ∘ b) is true (where ∘ is some mathematical operator, and f(x) is the functionality of component x). o But increasingly, the software engineering community is discovering that FC, even if it were a solved problem, is still not mature enough to address other serious concerns that arise in CBSE and CBD.
  3. Ilities These concerns of the software engineering community stem from the problem of composing ilities. o Ilities are nonfunctional properties of software components that define characteristics such as security, reliability, fault tolerance, performance, availability, and safety.
  4. The Problem The problem stems from our inability to know such properties a priori. o For example, the security of a system composed of two components, A and B, cannot be determined from knowledge of the security of A and the security of B alone. Why? o Because the security of the composite is based on more than just the security of the individual components.
  5. An Example As an example, suppose that: o A is an operating system and B is an intrusion detection system. o Operating systems have some level of built-in authentication security. o Intrusion detection systems have some definition of the types of event patterns that warn of a possible attack.
  6. The Example Continued With A as an operating system and B as an intrusion detection system: o Even if we assume that A provides excellent security and B provides excellent security, o we must still accept that the security of B is also a function of calendar time, since new attack patterns emerge after B's definitions are fixed. So the question comes down to: which ilities, if any, are easy to compose? o The answer is that no ilities are easy to compose, and some are much harder to compose than others.
  7. Reliability For reliability, consider a two-component system in which component A feeds information to B and B produces the output of the composite. Assume that both components are reliable. What can we assume about the reliability of the composite?
  8. Performance One ility that, at least on the surface, appears to have the best chance of successful composability is performance. But even that is problematic in a practical sense: a component's performance after composition depends largely on the relevant hardware and other physical resources.
  9. What Can Be Done? Our interest is in creating and deploying qualitative techniques that can augment traditional reliability quantification techniques. We wish to be able to predict the behavior of the software when it is supplied with corrupted information. o By doing so we gain new information about how the software will behave, information that is completely different from the information collected during operational profile-based reliability testing.
  10. Isolating Potential Contributors When software systems fail, confusing and complex liability problems ensue for all parties that have contributed software functionality (whether COTS or custom) to the system. Potential contributors to the system failure include o (1) defective software components. o (2) problems with interfaces between components. o (3) problems with assumptions (contractual requirements) between components. o (4) hidden interfaces and nonfunctional component behaviors that cannot be detected at the component level.
  11. Assumption Our approach here will be to disregard the particular reasons for the possible failure of a component or of the interface between components and assume the worst case (i.e., the occurrence of both possibilities). Assumption: it is possible to predict, a priori, how the composite system will behave as a result of the failure of a particular component.
  12. Difference Between Fault Injection and Reliability Testing Reliability testing is the process of generating test cases and then running them against the composite system. o The more test cases there are, the more behaviours of the system can be observed. Fault injection is the process of intentionally corrupting the data in one component to test its effect on another component.
  13. Interface Propagation Analysis The technique presented here for assessing the level of interoperability between COTS software components and custom components is Interface Propagation Analysis (IPA). IPA perturbs (i.e., corrupts) the states that propagate through the interfaces that connect COTS software components to other types of components. By corrupting data going from one component to a successor component, failure of the predecessor is approximated (simulated), and its impact on the successor can be assessed.
  14. How IPA Works To modify the information produced by a particular component, a small software routine named PERTURB is created that replaces the original output with a corrupted output. By simulating the failure of various software components, we determine whether or not these failures can be tolerated by the remainder of the system.
  15. An Example The cos() function (a fine-grained COTS utility for which we do not have access to the source code) can be used in an illustration: double cos(double x) To see how this analysis works, consider an application containing the following code: if (cos(a) > THRESHOLD) { do something }
  16. An Example Our objective is to determine how the application will behave if cos() returns incorrect information: if (PERTURB(cos(a)) > THRESHOLD) { do something }
  17. The Result of IPA PERTURB created corrupt states that in no way reflected how the components could behave in real operation. The fault injection process was still able to reveal to the designers of the system certain system-level behaviors that were totally unexpected. These behaviors were completely unsafe, and protection against them was essential. It is also possible that other hardware components or human activities associated with the system might be able to force the system into such hazardous states.
  18. Increasing the Efficiency of IPA Finally, it must be acknowledged that the exhaustive fault injection of software components is just as infeasible as the exhaustive testing of software. Therefore the proper approach to maximizing the value added by such a technique is first to identify which portions (functionally speaking) of the system are the most critical, and then analyze how that critical functionality degrades when components on which it depends fail.
  19. Two Additional Useful Techniques for Predicting Component Interoperability These techniques address how a software component will react when it receives inputs that are outside the range of any profile that the original designers anticipated. Note that here we are not necessarily talking about component input information that is corrupted, but rather input information that is simply outside the nominal operational (probabilistic) range within which reliability testing would normally exercise the component.
  20. Technique 1 The first technique involves the deliberate inversion of the operational profile originally anticipated by the system designers. If the defined operational profile turns out to be inaccurate, then the only benefit from doing so would be to learn about potentially dangerous output modes from the software, which might be difficult to detect by other means.
  21. Technique 2 The second technique is simply a combination of the previous technique with IPA. This is a situation in which the software is operating in an unusual input mode while being bombarded with corrupt information. This provides a unique assessment of how robust the software is when it is operating under unusual circumstances and receiving corrupt information.
  22. Thank You