A symbiotic relationship between formal methods and security


What we want is that our controls balance perfectly with the interactions we want or need. So when we test operations we get the big picture of all our relationships, coming and going. We get to see the interconnectedness of the operations in fine detail and we get to map out what makes us, our business, and our operations what they are and can be.


Unfortunately, not everything works as configured. Not everyone behaves as trained. Additionally, more and more things are built from pre-fabricated constructs: materials, source code from pre-defined libraries, or, as in the case of training people, pre-existing experiences. The new builders are only aware of what they put together, not how the pre-fabricated parts work in a new environment, with new variables, and in new ways.

There are simply innumerable ways to get this wrong. One extreme is that the problem definition can be over-specified, which will result in the learning system solving narrow instances and not the whole problem of interest.


For instance, consider the task of navigating from point A to point B. If the source point A and the destination point B are always fixed in the same locations, the learner will likely only learn a policy specific to navigating between those two points and not a generalized policy for navigating between any two points. Alternatively, one can just as easily under-specify the problem and the task. In this case, if the problem description is too vague, it may render learning a hopeless exercise.

Referring to the navigation example, if the learning system is tasked with navigating to point B but is not given any indication as to where B is, it may never find where it needs to go.
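To make that concrete, a minimal sketch of a hypothetical gridworld setup is shown below; the grid size, function name, and randomization scheme are our own illustrative choices, not taken from any particular library.

```python
import random

GRID = 8  # side length of a hypothetical grid world

def reset(randomize=True):
    """Return (start, goal) for one episode.

    randomize=True samples new points each episode, so the learner must
    generalize; randomize=False pins A and B, risking an over-specified task.
    """
    if randomize:
        start = (random.randrange(GRID), random.randrange(GRID))
        goal = (random.randrange(GRID), random.randrange(GRID))
        while goal == start:
            goal = (random.randrange(GRID), random.randrange(GRID))
    else:
        start, goal = (0, 0), (GRID - 1, GRID - 1)  # always the same A and B
    return start, goal

# Under-specification is the opposite failure: for example, giving the agent
# no observation of where the goal is at all, so it can only wander.
```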

Bottom line: getting the problem specification wrong results in inadequate solutions and, in some cases, spectacular failures. These are just a few examples of ways defining problems can go wrong; there are many more. Success in defining a reinforcement learning problem starts with putting considerable thought and effort into what makes up the task and what a viable solution would accomplish.

This is what having subject matter expertise in the problem brings to the table. Without a solid understanding of the intricacies of a problem and what a solution looks like, it is nearly impossible to define it correctly.

From this understanding, the problem description can be properly scoped and defined such that the computer only concerns itself with relevant aspects of the problem and efficiently learns generalizable solutions. Reinforcement learning expertise is also vitally important: knowing how to effectively scope a problem and define a learnable objective is something that comes from experience. With those key experiential ingredients in place, the formal process for describing a reinforcement learning problem is to define it as what is known as a Markov Decision Process (MDP).

An MDP is a mathematical description of a multi-step problem. We will not delve into any of the math behind MDPs, but we will discuss its four components and what should be considered while defining them. An MDP is comprised of a state-space, an action-space, a reward function, and a discount factor.
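As a rough sketch, assuming a simple discrete problem, the four components can be collected into a single structure like the one below; the names (`MDP`, `reward_fn`, `gamma`) are illustrative rather than drawn from any specific framework.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

State = Tuple[int, int]   # e.g., a grid position
Action = str              # e.g., "forward", "left", ...

@dataclass
class MDP:
    states: Sequence[State]    # state-space: what the algorithm can observe
    actions: Sequence[Action]  # action-space: what it is allowed to do
    reward_fn: Callable[[State, Action, State], float]  # feedback per transition
    gamma: float               # discount factor: long- vs. short-term value

def discounted_return(rewards, gamma):
    """Value of a reward sequence under discount factor gamma."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))
```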


The State-Space represents everything visible to the algorithm as it learns a solution. It consists of a set of relevant observables.

The state-space determines what the algorithm can see or monitor when learning a solution to the problem. Given the current state, the algorithm should have access to all the information it needs to make a decision. The state-space is made up of a set of observable features relevant to solving the problem. For example, in a robotic navigation task, the state-space should include the current position of the robot and the location of the destination as features. When designing the state-space, it is imperative that only relevant features be included and that redundancy across features be minimized.


The reason this is crucial is that the size of the state-space has an inverse relationship with the solvability of the defined problem. Until recently, incorporating more than a handful of features in a state-space made most problems intractable for reinforcement learning methods.

Fortunately, recent advances in representation learning, specifically deep learning, have made it possible to solve problems with hundreds of state features. Representation learning methods work in concert with reinforcement learning, as in deep reinforcement learning, and these methods should be used in tackling any practical problem.
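For the navigation example, a compact observation might look like the following sketch; the feature names are hypothetical, and the point is simply to include the features the decision needs while leaving out redundant or derived ones.

```python
def make_observation(robot_xy, goal_xy, obstacle_xy):
    """Build a small, non-redundant state vector for the navigation task.

    The robot and goal positions are the features the text calls out as
    required; derived quantities (such as distance to the goal) are left out
    because they are redundant with the positions and only inflate the
    state-space.
    """
    return (
        robot_xy[0], robot_xy[1],        # where the robot is
        goal_xy[0], goal_xy[1],          # where it needs to go
        obstacle_xy[0], obstacle_xy[1],  # nearest obstacle, if relevant to the task
    )
```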

The Action-Space represents every possible action for a given algorithm and the conditions under which the actions can be used.

An action-space defines all the possible actions a reinforcement learning algorithm can take as part of a policy and the conditions under which those actions are valid. In the robotic navigation task, an action-space could be comprised of a small set of movement actions, for example: move forward, move backward, turn left, and turn right.

The Reward Function tells the algorithm whether an action positively or negatively affected the mission. This feedback signal is the crux of learning and optimizing policies.

The reward function is the feedback signal the reinforcement learning algorithm uses to learn and optimize policies. It is a metric of value that tells the algorithm the relative value of every state in the state-space. If the agent achieves something good (e.g., it reaches the destination), it receives a positive reward. Alternatively, if it does something bad (e.g., it collides with an obstacle), it receives a negative reward. Also, any intermediary states (those that are neither clearly good nor bad) receive little or no reward. Designing a reward signal carefully is important, as it is what ultimately dictates the behavior of the learned policy.

Consideration must be made to ensure that the agent will not learn an undesirable policy. For example, in a robotic navigation task, if the reward function only rewards reaching the goal in as short a distance as possible, the agent may learn a policy that arbitrarily runs over pedestrians because they were on the shortest path.
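A sketch of a navigation reward that tries to avoid exactly that failure mode is shown below; the specific values and the `hit_pedestrian` flag are assumptions for illustration only.

```python
def reward(reached_goal, hit_pedestrian):
    """Illustrative navigation reward: value the goal, penalize unsafe outcomes.

    A small per-step cost still encourages short paths, but the large penalty
    for hitting a pedestrian dominates, so "shortest path at any cost" is no
    longer the optimal policy.
    """
    if hit_pedestrian:
        return -100.0   # catastrophic outcome outweighs any distance savings
    if reached_goal:
        return 10.0     # achieving the mission
    return -0.1         # mild step cost for intermediary states
```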

The Discount Factor specifies the value of long-term vs. short-term rewards.

Finally, the discount factor is a parameter that adjusts how much the algorithm values long-term versus short-term rewards. Different values for this parameter will change the behavior of learned policies.

Gaining experience even when failure is not an option

How much experience does a reinforcement learning algorithm require to learn? It depends on the complexity of the problem and the algorithm, but in general it is a tremendous amount.

For example, AlphaGo used experience data garnered from 30 million moves made by professional players and then it played against itself many thousands of times, trying new plays, gathering even more experience data [16].

Machine learning algorithms require data to learn — lots of data. Reinforcement learning is no exception; however, it uses experiences as its data. An experience, for a reinforcement learning algorithm, is taking an action in a given state and observing the outcome. The algorithm needs a representative set of experiences trying various actions in different states to learn a policy that sufficiently covers the problem.
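In code, an experience is often stored as a simple transition record that the algorithm can sample from repeatedly; the sketch below uses illustrative names and a plain Python buffer, not any particular reinforcement learning library.

```python
import collections
import random

# One experience: the state seen, the action taken, and the observed outcome.
Transition = collections.namedtuple(
    "Transition", ["state", "action", "reward", "next_state", "done"]
)

class ReplayBuffer:
    """Fixed-size store of experiences the algorithm can learn from repeatedly."""

    def __init__(self, capacity=100_000):
        self.buffer = collections.deque(maxlen=capacity)

    def add(self, *args):
        self.buffer.append(Transition(*args))

    def sample(self, batch_size):
        # A representative batch should mix many different states and actions.
        return random.sample(list(self.buffer), batch_size)
```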

In practice, this presents challenging problems: where is all the experience data going to come from, and how do we get it? Further, reinforcement learning algorithms need both positive and negative experiences to learn. For example, an autonomous vehicle that cannot reach a target location is no more useful than one that cannot avoid pedestrians.

If negative experiences have real costs (e.g., damaged equipment or harm to people), letting the algorithm gather them through trial and error may be impractical or unacceptable.

Where and how do we get data?

One of the major reasons reinforcement learning has been so successful in games is that games can be simulated perfectly.

This affords algorithms the opportunity to gather nearly infinite experience at negligible cost. If simulation is feasible, it is definitively the best way to gather experience; however, most real-world problem domains, particularly physical environments, cannot be simulated with sufficiently high fidelity for policies learned in simulation to generalize well to real environments.

Additionally, the cost of obtaining real-world experience data, in terms of money, time, and risk, can be too great to make learning policies from scratch practical. Hope for reinforcement learning in realistic domains that cannot be simulated is not lost.

Many real-world problems that could be solved by reinforcement learning algorithms are already being performed by people, and reinforcement learning algorithms can learn from the experiences of others.

If reinforcement learning algorithms observe and record the experiences of people, who can be assumed to act intelligently, they can use those experiences to bootstrap learning and arrive at policies that are as effective as, or possibly even more effective than, the ones used by people. This form of reinforcement learning is called learning from demonstration, and it is a practical approach to reducing the risk of gathering experience. While it does not reduce the amount of experience required, it does reduce the inherent risk of allowing the algorithm to enact potentially unsafe actions.
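One common way to bootstrap from recorded human experience is to imitate the demonstrations first and only then continue improving the policy with reinforcement learning. The sketch below assumes demonstrations are logged as (state, action) pairs and uses a generic fit(inputs, targets) interface as a stand-in for whatever model is actually used.

```python
def bootstrap_from_demonstrations(demonstrations, policy_model):
    """Pre-train a policy on recorded human (state, action) pairs.

    `demonstrations` is assumed to be an iterable of (state, action) tuples
    logged while people performed the task; `policy_model` is any supervised
    model exposing a fit(inputs, targets) method (a placeholder interface).
    """
    states = [s for s, _ in demonstrations]
    actions = [a for _, a in demonstrations]
    policy_model.fit(states, actions)   # imitate the demonstrators first
    return policy_model                 # then continue improving it with RL

# After this warm start, the algorithm still gathers its own experience, but
# it begins from behavior that is already roughly as safe as the humans'.
```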

As the tasks are already being performed by people, the experience gathering comes at little additional cost. Further, if the learning system is deployed as part of a mixed-initiative system, where both human operators and the machine can make decisions, the arrangement can be mutually beneficial. The computer benefits from direct feedback from the operator, and the operator benefits from improved reaction times and, potentially, insights provided by the computer.

Such a mixed-initiative system could even be designed to provide justification to an operator for each decision, which would go a long way towards improving trust in autonomous learning systems.

Giving the robot the keys to your car

The prospect of autonomous learning machines is both exciting and a bit frightening. A fundamental challenge in fielding any autonomous system is trust: can the system be trusted to perform as expected without horrific consequences? When and how to trust an autonomous learning machine is both a technical and a non-technical challenge.

From a technical perspective, methods that can formally verify an autonomous learning system do not exist. Existing formal verification processes are regularly used to determine whether a rule-based control system will perform within specified performance boundaries. For a learning system, however, it is difficult, if not impossible, to predict exactly which actions it will take, because it is designed to learn to do things it was not explicitly programmed to do.

Bounded autonomy is one way to allow existing methods to work; however, defining the boundaries is also challenging, and the limits they place on learning can prevent the machine from learning the best solutions. As an alternative to formal verification, one could use exhaustive empirical testing to verify a learning system. This is the approach that Waymo and Uber are currently using in their development of autonomous vehicles. The most significant issues with this approach are the time and cost associated with doing the exhaustive testing.
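One way to picture bounded autonomy is as a safety filter wrapped around the learned policy: the learner proposes an action, but only actions inside a verified envelope are executed. The sketch below is our own; is_within_bounds stands in for whatever rule-based check existing verification methods can analyze.

```python
def bounded_step(policy, state, is_within_bounds, fallback_action):
    """Execute the learned policy only inside a verified safety envelope.

    `policy(state)` proposes an action; `is_within_bounds(state, action)` is a
    placeholder for a rule-based check that existing verification methods can
    analyze; `fallback_action` is a known-safe default.
    """
    proposed = policy(state)
    if is_within_bounds(state, proposed):
        return proposed          # autonomy is allowed inside the boundary
    return fallback_action       # otherwise fall back to the verified behavior

# The trade-off noted above applies: a tight boundary is easier to verify but
# may exclude the best solutions the learner could otherwise discover.
```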

To conduct verification testing effectively, the system must be evaluated under all foreseeable conditions.


Designing and running the tests is difficult and time-consuming. Additionally, if any major changes are made to the system during testing, due to failures or new technologies, then prior testing must be repeated to validate the new system. Finally, even if the system passes all tests, there are still no guarantees, as unforeseen conditions will more than likely occur.

On the non-technical side, people tend to be, understandably, skeptical of machines making decisions for them. Autonomous machines are a new thing and it is a bit unnerving to give up control, especially when mistakes in that control can lead to substantial monetary losses or even losses of life.

Given all these challenges, what is the answer? How do we gain trust?


Should we even try? Acceptance is often a slow process for major technical advances, but trust can be built. This trust must be earned, which will likely come down to taking measured risks over time. Learning machines are here to stay and, as time moves forward, we will become more accustomed to letting them perform increasingly complex functions. Technology will get better, as will testing and verification processes, reducing risk and costs. The pioneering adopters will struggle and many will likely fail, but the potential benefits outweigh those risks.

These parameters will be used to determine the completion of behaviors and actions. Each of the schemas is described below.

The first schema is LOGIN, shown in figure 2. It checks whether the username and password are found in the system's list of users; if not, the user is blocked. If the checks pass, the user's state changes to online, which means that the user has logged in to the system.

The second schema is BUY, shown in figure 3. It checks whether the user has enough balance to buy the product; if so, the product price is deducted from the user's balance. Each product has a quantity and a price, which are used to determine the completion of the buy operation.

The third schema is RECHARGE, shown in figure 4. Its function is to recharge the user's balance with the entered amount.

The fourth schema is ALERT, shown in figure 5. It changes the user's status if the user performs suspicious activities. This operation specifies and records suspicious behavior of the user and prohibits the user from breaking the rules of buying. As a result, it assures the security of the system specification.

The case study starts with a user who does not have enough credit to buy a book. After several attempts made by the user to complete the purchase, there might be a chance that the system accepts the transaction. Consequently, the user will be forbidden from using the website for a period of time.

Using formal methods to construct specifications for a secure e-commerce system that is capable of limiting security threats, the result is promising for future expansions to improve and add security features.
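As a rough, executable approximation of the preconditions and postconditions captured by the LOGIN, BUY, and ALERT schemas described above, one might write something like the following sketch; the field names are our own assumptions, not the paper's Z notation.

```python
def buy(user, product):
    """Approximate the BUY schema: check preconditions, then update state.

    `user` and `product` are assumed to be simple objects carrying the fields
    the schemas mention (balance, status, suspicious flag, price, quantity).
    """
    # Preconditions from the schema descriptions
    if user.status != "online":
        return "not logged in"        # LOGIN must have succeeded first
    if user.suspicious:
        return "blocked by ALERT"     # ALERT forbids rule-breaking users
    if product.quantity <= 0 or user.balance < product.price:
        return "purchase refused"
    # Postconditions: deduct the price and reduce stock
    user.balance -= product.price
    product.quantity -= 1
    return "purchase complete"
```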

Formal methods can be used to formally specify system requirements. This is particularly useful, as those properties or specification checks can be implemented so that they are checked automatically during the occurrence of security threats.