Reasoning Under Uncertainty

Most tasks requiring intelligent behavior have some degree of uncertainty associated with them.

The type of uncertainty that can occur in knowledge-based systems may be caused by problems with the data. For example:

1. Data might be missing or unavailable.

2. Data might be present but unreliable or ambiguous due to measurement errors.

3. The representation of the data may be imprecise or inconsistent.

4. Data may just be the user's best guess.

5. Data may be based on defaults, and the defaults may have exceptions.

The uncertainty may also be caused by the represented knowledge since it might

1. Represent best guesses of the experts that are based on plausible or statistical associations they have observed.

2. Not be appropriate in all situations (e.g., may have indeterminate applicability).

Given these numerous sources of errors, most knowledge-based systems require the incorporation of some form of uncertainty management.

When implementing some uncertainty scheme we must be concerned with three issues:

1. How to represent uncertain data

2. How to combine two or more pieces of uncertain data

3. How to draw inference using uncertain data

We will introduce three ways of handling uncertainty:

Probabilistic reasoning.

Certainty factors

Dempster-Shafer Theory

1. Classical Probability

The oldest and best defined technique for managing uncertainty is based on classical probability theory. Let us start to review it by introducing some terms.

Sample space: Consider an experiment whose outcome is not predictable with certainty in advance. However, although the outcome of the experiment will not be known in advance, let us suppose that the set of all possible outcomes is known. This set of all possible outcomes of an experiment is known as the sample space of the experiment and denoted by S.

For example:

1. If the outcome of an experiment consists in the determination of the sex of a newborn child, then S = {g, b}, where the outcome g means that the child is a girl and b that it is a boy.

2. If the experiment consists of flipping two coins, then the sample space consists of the following four points: S = {(H, H), (H, T), (T, H), (T, T)}
Event: any subset E of the sample space is known as an event.

That is, an event is a set consisting of possible outcomes of the experiment. If the outcome of the experiment is contained in E, then we say that E has occurred.

For example, if E = {(H, H), (H, T)}, then E is the event that a head appears on the first coin.

For any event E we define the new event E’, referred to as the complement of E, to consist of all points in the sample space S that are not in E.

Mutually exclusive events: A set of events E1, E2, ..., En in a sample space S are called mutually exclusive events if Ei ∩ Ej = ∅ for i ≠ j, 1 ≤ i, j ≤ n.

A formal theory of probability can be made using three axioms:

1. 0 ≤ P(E) ≤ 1.

2. Σi P(Ei) = 1 (or P(S) = 1)

This axiom states that the probabilities of all the mutually exclusive events in the sample space sum to 1.

As a corollary of this axiom:

P(Ei) + P(Ei’) = 1,

where Ei’ is the complement of event Ei.

3. P(E1 ∪ E2) = P(E1) + P(E2),

where E1 and E2 are mutually exclusive events. This extends to any collection of mutually exclusive events.
Compound probabilities

Events that do not affect each other in any way are called independent events. For two independent events A and B,

P(A ∩ B) = P(A) P(B)

Independent events: The events E1, E2, ..., En in a sample space S are independent if

P(Ei1 ∩ ... ∩ Eik) = P(Ei1) ... P(Eik)

for each subset {i1, ..., ik} ⊆ {1, ..., n}, 1 ≤ k ≤ n, n ≥ 1.

If events A and B are mutually exclusive, then

P(A ∪ B) = P(A) + P(B)

If events A and B are not mutually exclusive, then

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

This is also called the Addition law.
Conditional Probabilities

The probability of an event A, given B occurred, is called a conditional probability and indicated by

P(A | B)

The conditional probability is defined as

P(A | B) = P(A ∩ B) / P(B), for P(B) ≠ 0.

Multiplicative Law of probability for two events is then defined as

P(A ∩ B) = P(A | B) P(B)

which is equivalent to P(A ∩ B) = P(B | A) P(A)

Generalized Multiplicative Law: P(A1 ∩ A2 ∩ ... ∩ An) = P(A1 | A2 ∩ ... ∩ An) P(A2 | A3 ∩ ... ∩ An) ... P(An-1 | An) P(An)
An example

As an example of probabilities, Table below shows hypothetical probabilities of a disk crash using a Brand X drive within one year.

                    Brand X    Brand X'    Total of rows
Crash C               0.6        0.1           0.7
No crash C'           0.2        0.1           0.3
Total of columns      0.8        0.2           1.0

Hypothetical probabilities of a disk crash

                    X            X'            Total of rows
C                 P(C ∩ X)     P(C ∩ X')       P(C)
C'                P(C' ∩ X)    P(C' ∩ X')      P(C')
Total of columns  P(X)         P(X')           1

Probability interpretation of two sets
Using the above tables, the probabilities of all events can be calculated. Some probabilities are
(1) P(C) = 0.7
(2) P(C') = 0.3
(3) P(X) = 0.8
(4) P(X') = 0.2
(5) P(C ∩ X) = 0.6 (the probability of a crash and using Brand X)
(6) The probability of a crash, given that Brand X is used, is P(C | X) = P(C ∩ X) / P(X) = 0.6 / 0.8 = 0.75

(7) The probability of a crash, given that Brand X is not used, is P(C | X') = P(C ∩ X') / P(X') = 0.1 / 0.2 = 0.50

Probabilities (5) and (6) may appear to have similar meanings when you read their descriptions. However (5) is simply the intersection of two events, while (6) is a conditional probability.

The meaning of (5) is the following:

IF a disk drive is picked randomly, then 0.6 of the time it will be Brand X and have crashed.

In other words, we are just picking samples from the population of disk drives. Some of those drives are Brand X and have crashed (0.6), some are not Brand X and have crashed (0.1), some are Brand X and have not crashed (0.2), and some are not Brand X and have not crashed (0.1).

In contrast, the meaning of the conditional probability (6) is very different

IF a Brand X disk drive is picked, then 0.75 of the time it will have crashed.

Note also that if any of the following equations is true, then events A and B are independent:

P(A | B) = P(A), or

P(B | A) = P(B), or

P(A ∩ B) = P(A) P(B).
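The arithmetic in this example is easy to reproduce in code. The following Python sketch (not part of the original notes; the dictionary layout and variable names are just one convenient representation) recomputes the marginals, the conditional probability P(C | X), and the independence check from the joint-probability table above.

```python
# A minimal sketch (not from the original notes): joint probabilities from the
# disk-crash table, stored in a dictionary keyed by (crash, brand).
joint = {
    ("C", "X"): 0.6,    # crash and Brand X
    ("C", "X'"): 0.1,   # crash and not Brand X
    ("C'", "X"): 0.2,   # no crash and Brand X
    ("C'", "X'"): 0.1,  # no crash and not Brand X
}

# Marginals: sum the joint probabilities over the other variable.
p_crash = joint[("C", "X")] + joint[("C", "X'")]    # P(C) = 0.7
p_brand_x = joint[("C", "X")] + joint[("C'", "X")]  # P(X) = 0.8

# Conditional probability P(C | X) = P(C and X) / P(X).
p_crash_given_x = joint[("C", "X")] / p_brand_x     # 0.75

# Independence check: C and X are independent only if P(C and X) = P(C) P(X).
independent = abs(joint[("C", "X")] - p_crash * p_brand_x) < 1e-9

print(round(p_crash, 2), round(p_brand_x, 2),
      round(p_crash_given_x, 2), independent)       # 0.7 0.8 0.75 False
```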

Bayes’ Theorem

Note that conditional probability is defined as

P(H | E) = P(H ∩ E) / P(E), for P(E) ≠ 0,

i.e., the conditional probability of H given E.

In real-life practice, the probability P(H | E) cannot always be found in the literature or obtained from statistical analysis. The conditional probabilities

P(E | H),

however, are often easier to come by.

In medical textbooks, for example, a disease is described in terms of the signs likely to be found in a typical patient suffering from the disease.

The following theorem provides us with a method for computing the conditional probability P(H | E) from the probabilities P(E), P(H) and P(E | H);
From conditional probability:

P(H | E) = P(H ∩ E) / P(E)

Furthermore, we have

P(E | H) = P(E ∩ H) / P(H)

So,

P(E | H) P(H) = P(H | E) P(E) = P(H ∩ E)

Thus

P(H | E) = P(E | H) P(H) / P(E)

This is Bayes' Theorem. Its general form can be written in terms of events, E, and hypotheses (assumptions), Hi, in the following alternative forms.

P(Hi | E) = P(E ∩ Hi) / Σj P(E ∩ Hj)

          = P(E | Hi) P(Hi) / Σj P(E | Hj) P(Hj) = P(E | Hi) P(Hi) / P(E)
Hypothetical reasoning and backward induction

Bayes' Theorem is commonly used for decision tree analysis in business and the social sciences.

The method of Bayesian decision making is also used in the expert system PROSPECTOR.

We use oil exploration in PROSPECTOR as an example. Suppose the prospector believes that there is a better than 50-50 chance of finding oil, and assumes the following.

P(O) = 0.6 and P(O’) = 0.4

Using the seismic survey technique, we obtain the following conditional probabilities, where + means a positive outcome and - is a negative outcome

P(+ | O) = 0.8 P(- | O) = 0.2 (false -)
P(+ | O’) = 0.1 (false +) P(- | O’) = 0.9

Using the prior and conditional probabilities, we can construct the initial probability tree as shown below.

[Figure: Initial probability tree for oil exploration]

Using the Addition law to calculate the total probability of a + and a - test:

P(+) = P(+ ∩ O) + P(+ ∩ O') = 0.48 + 0.04 = 0.52
P(-) = P(- ∩ O) + P(- ∩ O') = 0.12 + 0.36 = 0.48

P(+) and P(-) are unconditional probabilities that can now be used to calculate the posterior probabilities at the site, as shown below.

[Figure: Revised probability tree for oil exploration]
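As a quick check of these numbers, here is a hedged Python sketch (variable names are illustrative only, not taken from PROSPECTOR) that recomputes the total probabilities of a positive and negative test and the posterior probabilities of finding oil.

```python
# Illustrative sketch of the calculation above (names are not PROSPECTOR's own).
p_oil, p_no_oil = 0.6, 0.4            # prior probabilities P(O), P(O')
p_pos_given_oil = 0.8                 # P(+ | O);  P(- | O)  = 0.2
p_pos_given_no_oil = 0.1              # P(+ | O'); P(- | O') = 0.9

# Total (unconditional) probability of a positive and a negative test.
p_pos = p_pos_given_oil * p_oil + p_pos_given_no_oil * p_no_oil   # 0.52
p_neg = 1 - p_pos                                                 # 0.48

# Posterior probabilities at the site, via Bayes' theorem.
p_oil_given_pos = p_pos_given_oil * p_oil / p_pos                 # 0.48/0.52 = 12/13
p_oil_given_neg = (1 - p_pos_given_oil) * p_oil / p_neg           # 0.12/0.48 = 1/4

print(round(p_pos, 2), round(p_neg, 2))                      # 0.52 0.48
print(round(p_oil_given_pos, 3), round(p_oil_given_neg, 3))  # 0.923 0.25
```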
The figure below shows the Bayesian decision tree using the data from the figure above. The payoffs are at the bottom of the tree.
Thus if oil is found, the payoff is $1,250,000 - $200,000 - $50,000 = $1,000,000 while a decision to quit after the seismic test result gives a payoff of -$50,000.
[Figure: Initial Bayesian decision tree for oil exploration]
The assumed amounts are:
Oil lease, if successful: $1,250,000
Drilling expense: -$200,000
Seismic survey: -$50,000
In order for the prospector to make the best decision, the expected payoff must be calculated at event node A.

To compute the expected payoff at A, we must work backward from the leaves. This process is called backward induction.

The expected payoff from an event node is the sum of the payoffs times the probabilities leading to the payoffs.

Expected payoff at node C
$846,153 = ($1,000,000) (12/13) - ($1,000,000) (1/13)

Expected payoff at node B
-$500,000 = ($1,000,000) (1/4) - ($1,000,000) (3/4)

[Figure: Complete Bayesian decision tree for oil exploration, evaluated by backward induction]
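The backward-induction step can also be sketched in a few lines of Python. This is only an illustration using the payoff assumptions stated in the text (+$1,000,000 if oil is found, -$1,000,000 for a dry hole, -$50,000 for quitting after the survey); the helper function is hypothetical.

```python
# Illustrative sketch of backward induction over the decision tree above.
def expected_payoff(outcomes):
    """Expected payoff at an event node: sum of payoff times probability."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Node C: drilling after a positive test, where P(O | +) = 12/13.
node_c = expected_payoff([(1_000_000, 12 / 13), (-1_000_000, 1 / 13)])   # ~846,154

# Node B: drilling after a negative test, where P(O | -) = 1/4.
node_b = expected_payoff([(1_000_000, 1 / 4), (-1_000_000, 3 / 4)])      # -500,000

# At each act node, take the better of drilling and quitting (-50,000),
# then take the expectation over the test outcome to get the value at node A.
value_after_pos = max(node_c, -50_000)
value_after_neg = max(node_b, -50_000)
node_a = 0.52 * value_after_pos + 0.48 * value_after_neg                 # ~416,000

print(round(node_c), round(node_b), round(node_a))
```

The resulting strategy matches the one read off the tree: drill only when the seismic test is positive, otherwise quit.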

The decision tree shows the optimal strategy for the prospector. If the seismic test is positive, the site should be drilled, otherwise, the site should be abandoned.

The decision tree is an example of hypothetical reasoning or “what if” type of situations.

By exploring alternate paths of action, we can prune paths that do not lead to optimal payoffs.

Bayes’ rule and knowledge-based systems

As we know, rule-based systems express knowledge in an IF-THEN format:

IF X is true THEN Y can be concluded with probability p

If we observe that X is true, then we can conclude that Y exists with the specified probability. For example,

IF the patient has a cold THEN the patient will sneeze (0.75)

But what if we reason abductively and observe Y (i.e., the patient sneezes) while knowing nothing about X (i.e., the patient has a cold)? What can we conclude about it? Bayes’ Theorem describes how we can derive a probability for X.

Within the rule given above, Y denotes some piece of evidence (typically referred to as E) and X denotes some hypothesis (H). Bayes' Theorem then gives

(1) P(H | E) = P(E | H) P(H) / P(E)

or

(2) P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | H') P(H')]

To make this more concrete, consider whether Rob has a cold (the hypothesis) given that he sneezes (the evidence).

Equation (2) states that the probability that Rob has a cold given that he sneezes is the ratio of the probability that he both has a cold and sneezes, to the probability that he sneezes.

The probability of his sneezing is the sum of the conditional probability that he sneezes when he has a cold and the conditional probability that he sneezes when he doesn't have a cold, each weighted by the prior probability of the corresponding hypothesis. In other words, it is the probability that he sneezes regardless of whether he has a cold or not. Suppose that we know in general

P(H) = P(Rob has a cold) = 0.2
P(E | H) = P(Rob was observed sneezing | Rob has a cold) = 0.75
P(E | H') = P(Rob was observed sneezing | Rob does not have a cold) = 0.2
Then

P(E) = P(Rob was observed sneezing) = (0.75)(0.2) + (0.2)(0.8) = 0.31

and

P(H | E) = P(Rob has a cold | Rob was observed sneezing) = (0.75)(0.2) / 0.31 = 0.48387

Or Rob’s probability of having a cold given that he sneezes is about 0.5.

We can also determine what his probability of having a cold would be if he was not sneezing:

P(H | E') = P(E' | H) P(H) / P(E') = (1 - 0.75)(0.2) / (1 - 0.31) = 0.07246

So knowledge that he sneezes increases his probability of having a cold by a factor of approximately 2.5, while knowledge that he does not sneeze decreases his probability by a factor of almost 3.
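A minimal Python sketch of this two-hypothesis calculation (using nothing beyond the numbers already given above):

```python
# Minimal sketch of the two-hypothesis Bayes calculation above.
p_cold = 0.2                  # P(H)
p_sneeze_given_cold = 0.75    # P(E | H)
p_sneeze_given_no_cold = 0.2  # P(E | H')

# Total probability that Rob sneezes.
p_sneeze = (p_sneeze_given_cold * p_cold
            + p_sneeze_given_no_cold * (1 - p_cold))              # 0.31

# Posterior if he sneezes, and posterior if he does not sneeze.
p_cold_given_sneeze = p_sneeze_given_cold * p_cold / p_sneeze     # ~0.484
p_cold_given_no_sneeze = ((1 - p_sneeze_given_cold) * p_cold
                          / (1 - p_sneeze))                       # ~0.072

print(round(p_sneeze, 2),
      round(p_cold_given_sneeze, 3),
      round(p_cold_given_no_sneeze, 3))
```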

Propagation of Belief

Note that what we have just examined is very limited since we have only considered the case where each piece of evidence affects only one hypothesis.

This must be generalized to deal with “m” hypotheses H1, H2, ... Hm and “n” pieces of evidence E1, ..., En, the situation normally encountered in real-world problems. When these factors are included, Equation (2) becomes

(3) P(Hi | Ej1 ∩ Ej2 ∩ ... ∩ Ejk)

    = P(Ej1 ∩ Ej2 ∩ ... ∩ Ejk | Hi) P(Hi) / P(Ej1 ∩ Ej2 ∩ ... ∩ Ejk)

    = P(Ej1 | Hi) P(Ej2 | Hi) ... P(Ejk | Hi) P(Hi) / Σl=1..m P(Ej1 | Hl) P(Ej2 | Hl) ... P(Ejk | Hl) P(Hl)

where {j1, ..., jk} ⊆ {1, ..., n}

This probability is called the posterior probability of hypothesis Hi from observing evidence Ej1, Ej2, ..., Ejk.
This equation is derived based on several assumptions:

1. The hypotheses H1, ..., Hm, m ≥ 1, are mutually exclusive.

2. Furthermore, the hypotheses H1, ..., Hm are collectively exhaustive.

3. The pieces of evidence E1, ..., En, n ≥ 1, are conditionally independent given any hypothesis Hi, 1 ≤ i ≤ m.

Conditional independence: The events E1, E2, ..., En are conditionally independent given an event H if P(Ej1 ∩ ... ∩ Ejk | H) = P(Ej1 | H) ... P(Ejk | H) for each subset {j1, ..., jk} ⊆ {1, ..., n}.

This last assumption often causes great difficulties for probability-based methods.

For example, two symptoms, A and B, might each independently indicate that some disease is 50 percent likely. Together, however, it might be that these symptoms reinforce (or contradict) each other. Care must be taken to ensure that such a situation does not exist before using the Bayesian approach.
To illustrate how belief is propagated through a system using Bayes’ rule, consider the values shown in the Table below. These values represent (hypothetically) three mutually exclusive and exhaustive hypotheses

1. H1, the patient, Rob, has a cold;

2. H2, Rob has an allergy; and

3. H3, Rob has a sensitivity to light

with their prior probabilities, P(Hi)’s, and two conditionally independent pieces of evidence

1. E1, Rob sneezes and

2. E2, Rob coughs,

which support these hypotheses to differing degrees.

              i = 1     i = 2       i = 3
              (cold)    (allergy)   (light sensitive)
P(Hi)          0.6       0.3         0.1
P(E1 | Hi)     0.3       0.8         0.3
P(E2 | Hi)     0.6       0.9         0.0
If we observe evidence E1 (e.g., the patient sneezes), we can compute posterior probabilities for the hypotheses using Equation (3) (where k = 1) to be:

P(H1 | E1) = (0.3)(0.6) / [(0.3)(0.6) + (0.8)(0.3) + (0.3)(0.1)] = 0.40

P(H2 | E1) = (0.8)(0.3) / [(0.3)(0.6) + (0.8)(0.3) + (0.3)(0.1)] = 0.53

P(H3 | E1) = (0.3)(0.1) / [(0.3)(0.6) + (0.8)(0.3) + (0.3)(0.1)] = 0.07

Note that the beliefs in hypotheses H1 and H3 have both decreased while the belief in hypothesis H2 has increased after observing E1. If E2 (e.g., the patient coughs) is now observed, new posterior probabilities can be computed from Equation (3) (where k = 2):

P(H1 | E1 ∩ E2) = (0.3)(0.6)(0.6) / [(0.3)(0.6)(0.6) + (0.8)(0.9)(0.3) + (0.3)(0.0)(0.1)] = 0.33

P(H2 | E1 ∩ E2) = (0.8)(0.9)(0.3) / [(0.3)(0.6)(0.6) + (0.8)(0.9)(0.3) + (0.3)(0.0)(0.1)] = 0.67

P(H3 | E1 ∩ E2) = (0.3)(0.0)(0.1) / [(0.3)(0.6)(0.6) + (0.8)(0.9)(0.3) + (0.3)(0.0)(0.1)] = 0.0

Hypothesis H3 (e.g., sensitivity to light) has now ceased to be a viable hypothesis and H2 (e.g., allergy) is considered much more likely than H1 (e.g., cold) even though H1 initially ranked higher.
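The propagation computed above follows directly from equation (3). The sketch below (an illustration, not code from any actual system) applies the formula under the stated assumptions and reproduces the posterior values for one and then two pieces of evidence; the dictionary keys are shorthand labels for the hypotheses.

```python
# A sketch of equation (3) under its stated assumptions (mutually exclusive,
# exhaustive hypotheses; conditionally independent evidence). The numbers are
# the hypothetical values from the table above.
priors = {"cold": 0.6, "allergy": 0.3, "light": 0.1}
likelihoods = {                 # P(Ej | Hi) for E1 = sneezes, E2 = coughs
    "cold":    {"E1": 0.3, "E2": 0.6},
    "allergy": {"E1": 0.8, "E2": 0.9},
    "light":   {"E1": 0.3, "E2": 0.0},
}

def posterior(observed):
    """P(Hi | observed evidence): multiply the likelihoods and normalize."""
    unnormalized = {}
    for h, prior in priors.items():
        product = prior
        for e in observed:
            product *= likelihoods[h][e]
        unnormalized[h] = product
    total = sum(unnormalized.values())
    return {h: value / total for h, value in unnormalized.items()}

print(posterior(["E1"]))        # ~ {'cold': 0.40, 'allergy': 0.53, 'light': 0.07}
print(posterior(["E1", "E2"]))  # ~ {'cold': 0.33, 'allergy': 0.67, 'light': 0.0}
```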
Advantages and disadvantages of Bayesian methods

The Bayesian methods have a number of advantages that indicate their suitability for uncertainty management.

Most significant is their sound theoretical foundation in probability theory. Thus, they are currently the most mature of all of the uncertainty reasoning methods.

While Bayesian methods are more developed than the other uncertainty methods, they are not without faults.

1. They require a significant amount of probability data to construct a knowledge base. Furthermore, human experts are normally uncertain and uncomfortable about the probabilities they are providing.

2. What are the relevant prior and conditional probabilities based on? If they are statistically based, the sample sizes must be sufficient so the probabilities obtained are accurate. If human experts have provided the values, are the values consistent and comprehensive?

3. Often the type of relationship between the hypothesis and evidence is important in determining how the uncertainty will be managed. Reducing these associations to simple numbers removes relevant information that might be needed for successful reasoning about the uncertainties. For example, Bayesian-based medical diagnostic systems have failed to gain acceptance because physicians distrust systems that cannot provide explanations describing how a conclusion was reached (a feature difficult to provide in a Bayesian-based system).

4. The reduction of the associations to numbers also eliminates the use of this knowledge within other tasks. For example, the associations that would enable the system to explain its reasoning to a user are lost, as is the ability to browse through the hierarchy of evidence to hypotheses.

2: Certainty factors

Certainty factor is another method of dealing with uncertainty. This method was originally developed for the MYCIN system.

One of the difficulties with the Bayesian method is that it requires too many probabilities, most of which could be unknown.

The problem gets very bad when there are many pieces of evidence.

Besides the problem of amassing all the conditional probabilities for the Bayesian method, another major problem that appeared with medical experts was the relationship of belief and disbelief.

At first sight, this may appear trivial since obviously disbelief is simply the opposite of belief. In fact, the theory of probability states that

P(H) + P(H’) = 1

and so

P(H) = 1 - P(H’)
For the case of a posterior hypothesis that relies on evidence, E

(1) P(H | E) = 1 - P(H’ | E)

However, when the MYCIN knowledge engineers began interviewing medical experts, they found that physicians were extremely reluctant to state their knowledge in the form of equation (1).

For example, consider a MYCIN rule such as the following.

IF   1) The stain of the organism is gram positive, and
     2) The morphology of the organism is coccus, and
     3) The growth conformation of the organism is chains

THEN There is suggestive evidence (0.7) that the identity of the organism is streptococcus

This can be written in terms of posterior probability:

(2) P(H | E1 ∩ E2 ∩ E3) = 0.7

where the Ei correspond to the three patterns of the antecedent.
The MYCIN knowledge engineers found that while an expert would agree to equation (2), the expert became uneasy and refused to agree with the probability result

(3) P(H' | E1 ∩ E2 ∩ E3) = 1 - 0.7 = 0.3

This illustrates that numbers such as 0.7 and 0.3 are degrees of belief, not probabilities.

Let us have another example.

Suppose this is your last course required for a degree. Assume your grade-point-average (GPA) has not been too good and you need an ‘A’ in this course to bring up your GPA. The following formula may express your belief in the likelihood of graduation.

(4) P(graduating | ‘A’ in this course) = 0.70

Notice that this likelihood is not 100%. The reason it's not 100% is that a final audit of your courses and grades must be made by the school. There could be problems, due to a number of reasons, that would still prevent your graduation.

Assuming that you agree with (4) (or perhaps your own value for the likelihood) then by equation (1)

(5) P(not graduating | ‘A’ in this course) = 0.30

From a probabilistic point of view, (5) is correct. However, it seems intuitively wrong. It is just not right that if you really work hard and get an ‘A’ in this course, then there is a 30% chance that you won’t graduate. (5) should make you uneasy.

The fundamental problem is that while P(H | E) implies a cause-and-effect relationship between E and H, there may be no cause-and-effect relationship between E and H'.

These problems with the theory of probability led the researchers in MYCIN to investigate other ways of representing uncertainty.

The method that they used with MYCIN was based on certainty factors.

Measures of belief and disbelief

In MYCIN, the certainty factor (CF) was originally defined as the difference between belief and disbelief.

CF(H, E) = MB(H, E) - MD(H, E)

where

CF is the certainty factor in the hypothesis H due to evidence E
MB is the measure of increased belief in H due to E
MD is the measure of increased disbelief in H due to E

The certainty factor is a way of combining belief and disbelief into a single number.

Combining the measures of belief and disbelief into a single number has some interesting uses.

The certainty factor can be used to rank hypotheses in order of importance.

For example, if a patient has certain symptoms which suggest several possible diseases, then the disease with the highest CF would be the one that is first investigated by ordering tests.

The measures of belief and disbelief were defined in terms of probabilities by

MB(H, E) = 1                                               if P(H) = 1
         = [max(P(H | E), P(H)) - P(H)] / [1 - P(H)]       otherwise

MD(H, E) = 1                                               if P(H) = 0
         = [min(P(H | E), P(H)) - P(H)] / [0 - P(H)]       otherwise

According to these definitions, some characteristics are shown in Table 5-1.

Characteristics                          Values
-----------------------------------------------------------------
Ranges                                   0 ≤ MB ≤ 1
                                         0 ≤ MD ≤ 1
                                         -1 ≤ CF ≤ 1
Certainly true hypothesis (P(H | E) = 1)   MB = 1, MD = 0, CF = 1
Certainly false hypothesis (P(H' | E) = 1) MB = 0, MD = 1, CF = -1
Lack of evidence (P(H | E) = P(H))         MB = 0, MD = 0, CF = 0
-----------------------------------------------------------------

Some Characteristics of MB, MD and CF
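For concreteness, here is a small Python sketch of these definitions (an illustration only, not MYCIN code). With P(H) = 0.8 and P(H | E) = 0.9 it gives MB = 0.5, MD = 0 and CF = 0.5, which matches the CF(H1, E) value quoted in the criticism of certainty factors later in this section.

```python
# Illustrative sketch of the MB, MD and original CF definitions above.
def mb(p_h, p_h_given_e):
    """Measure of increased belief in H due to E."""
    if p_h == 1:
        return 1.0
    return (max(p_h_given_e, p_h) - p_h) / (1 - p_h)

def md(p_h, p_h_given_e):
    """Measure of increased disbelief in H due to E."""
    if p_h == 0:
        return 1.0
    # [min(P(H|E), P(H)) - P(H)] / [0 - P(H)], rearranged to avoid a -0.0 result.
    return (p_h - min(p_h_given_e, p_h)) / p_h

def cf(p_h, p_h_given_e):
    """Original certainty factor: CF = MB - MD."""
    return mb(p_h, p_h_given_e) - md(p_h, p_h_given_e)

# Evidence that raises P(H) from 0.8 to 0.9 gives MB = 0.5, MD = 0, CF = 0.5.
print(round(mb(0.8, 0.9), 3), round(md(0.8, 0.9), 3), round(cf(0.8, 0.9), 3))
```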

The certainty factor, CF, indicates the net belief in a hypothesis based on some evidence.

A positive CF means the evidence supports the hypothesis since MB > MD.

A CF = 1 means that the evidence definitely proves the hypothesis.

A CF = 0 means one of two possibilities.

1. First, a CF = MB - MD = 0 could mean that both MB and MD are 0.

2. The second possibility is that MB = MD and both are nonzero. The result is that the belief is canceled out by the disbelief.

A negative CF means that the evidence favors the negation of the hypothesis since MB < MD. Another way of stating this is that there is more reason to disbelieve a hypothesis than to believe it.

For example, a CF = -70% means that the disbelief is 70% greater than the belief.

A CF=70% means that the belief is 70% greater than the disbelief.

Certainty factors allow an expert to express a belief without committing a value to the disbelief.

The following equation is true.

CF(H, E) + CF(H’, E) =0

The equation means that evidence supporting a hypothesis reduces support to the negation of the hypothesis by an equal amount so that the sum is always 0.

For the example of the student graduating if an ‘A’ is given in the course

CF(H,E) = 0.70 CF(H’,E) = -0.70

which means

(6) I am 70% certain that I will graduate if I get an 'A' in this course.
(7) I am -70% certain that I will not graduate if I get an 'A' in this course.

A CF of 0 means no evidence.
Certainty values greater than 0 favor the hypothesis, and certainty factors less than 0 favor the negation of the hypothesis.
Statements (6) and (7) are equivalent using certainty factors.
The above CF values might be elicited by asking

How much do you believe that getting an 'A' will help you graduate?

if the evidence is to confirm the hypothesis, or

How much do you disbelieve that getting an 'A' will help you graduate?

An answer of 70% to each question will set CF(H, E) = 0.7, and CF(H’,E) = -0.70.

Calculation with Certainty Factors

Although the original definition of CF was

CF = MB - MD

there were difficulties with this definition

because one piece of disconfirming evidence could control the confirmation of many other pieces of evidence.

For example, ten pieces of evidence might produce an MB = 0.999, and one disconfirming piece with MD = 0.799 could then give

CF = 0.999 - 0.799 = 0.200

The definition of CF was changed in MYCIN in 1977 to be

CF = (MB - MD) / (1 - min(MB, MD))

This softens the effects of a single piece of disconfirming evidence on many confirming pieces of evidence. Under this definition with MB=0.999, MD=0.799

CF = (0.999 - 0.799) / (1 - min(0.999, 0.799)) = 0.200 / (1 - 0.799) = 0.995

The MYCIN methods for combining evidence in the antecedent of a rule are shown in Table 5-2.

Evidence, E       Antecedent certainty
---------------------------------------------
E1 AND E2         min[CF(E1, e), CF(E2, e)]
E1 OR E2          max[CF(E1, e), CF(E2, e)]
NOT E             -CF(E, e)
---------------------------------------------
Table 5-2

For example, given a logical expression for combining evidence such as

E = (E1 AND E2 AND E3) or (E4 AND NOT E5)

the evidence E would be computed as

E = max[min(E1, E2, E3), min(E4, -E5)]

For the values E1 = 0.9, E2 = 0.8, E3 = 0.3, E4 = -0.5, E5 = -0.4, the result is

E = max[min(0.9, 0.8, 0.3), min(-0.5, -(-0.4))] = max[0.3, -0.5] = 0.3
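A short Python sketch of the Table 5-2 combination rules, applied to the same expression (the helper function names are illustrative, not MYCIN's own):

```python
# Sketch of the Table 5-2 antecedent combination rules.
def cf_and(*cfs):
    return min(cfs)

def cf_or(*cfs):
    return max(cfs)

def cf_not(value):
    return -value

# E = (E1 AND E2 AND E3) OR (E4 AND NOT E5)
e1, e2, e3, e4, e5 = 0.9, 0.8, 0.3, -0.5, -0.4
e = cf_or(cf_and(e1, e2, e3), cf_and(e4, cf_not(e5)))
print(e)  # 0.3
```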

The formula for the CF of a rule

IF E THEN H

is given by

(8) CF(H,e) = CF(E,e) CF(H,E)

where CF(E,e) is the certainty factor of the evidence E making up the antecedent of the rule, based on uncertain evidence e.

CF(H,E) is the certainty factor of the hypothesis assuming that the evidence is known with certainty, i.e., when CF(E,e) = 1. CF(H,e) is the certainty factor of the hypothesis based on uncertain evidence e.
Thus, if all the evidence in the antecedent is known with certainty, the formula for the certainty factor of the hypothesis is

CF(H,e) = CF(H,E)

since CF(E,e) = 1.

Consider the CF for the streptococcus rule discussed before,

IF   1) The stain of the organism is gram positive, and
     2) The morphology of the organism is coccus, and
     3) The growth conformation of the organism is chains
THEN There is suggestive evidence (0.7) that the identity of the organism is streptococcus

where the certainty factor of the hypothesis under certain evidence is

CF(H, E) = CF(H, E1 ∩ E2 ∩ E3) = 0.7

and is also called the attenuation factor.

The attenuation factor is based on the assumption that all the evidence--E1, E2 and E3--is known with certainty. That is,

CF(E1, e) = CF(E2, e) = CF(E3, e) = 1
What happens when all the evidence is not known with certainty?

In the case of MYCIN, formula (8) must be used to determine the resulting CF value, since CF(H, E1 ∩ E2 ∩ E3) = 0.7 is no longer valid for uncertain evidence.

For example, assuming CF(E1,e) = 0.5, CF(E2,e) = 0.6, CF(E3,e) = 0.3,

then

CF(E,e) = CF(E1 ∩ E2 ∩ E3, e) = min[CF(E1,e), CF(E2,e), CF(E3,e)] = min[0.5, 0.6, 0.3] = 0.3

The certainty factor of the conclusion is CF(H, e) = CF(E,e) CF(H,E) = 0.3 * 0.7 = 0.21

What happens when another rule also concludes the same hypothesis, but with a different certainty factor?

The certainty factors of rules concluding the same hypothesis are combined using the combining function for certainty factors, defined as

(9) CFCOMBINE(CF1, CF2)

    = CF1 + CF2 (1 - CF1)                        if both CF1 and CF2 > 0

    = (CF1 + CF2) / (1 - min(|CF1|, |CF2|))      if one of CF1 and CF2 < 0

    = CF1 + CF2 (1 + CF1)                        if both CF1 and CF2 < 0

where CF1 is CF1(H, e) and CF2 is CF2(H, e).

The formula for CFCOMBINE used depends on whether the individual certainty factors are positive or negative.

The combining function for more than two certainty factors is applied incrementally. That is, CFCOMBINE is calculated for two CF values, and the result is then combined using formula (9) with the third CF value, and so forth.
The following figure summarizes the calculations with certainty factors for two rules based on uncertain evidence and concluding the same hypothesis.

[Figure: CF of two rules with the same hypothesis based on uncertain evidence]

In our example above, if another rule concludes streptococcus with certainty factor CF2 = 0.5, then the combined certainty factor using the first formula of (9) is

CFCOMBINE(0.21, 0.5) = 0.21 + 0.5(1 - 0.21) = 0.605

Suppose a third rule also has the same conclusion, but with CF3 = -0.4. Then the second formula of (9) is used to give

CFCOMBINE(0.605, -0.4) = (0.605 - 0.4) / (1 - min(|0.605|, |0.4|)) = 0.205 / (1 - 0.4) = 0.34
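The incremental combination can be sketched as follows (illustrative Python, reproducing the 0.605 and 0.34 results above):

```python
# Illustrative sketch of formula (9) for rules concluding the same hypothesis.
def cf_combine(cf1, cf2):
    if cf1 > 0 and cf2 > 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    # One of the two is negative (or zero): the mixed-sign case of formula (9).
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# The streptococcus example: combine 0.21 with 0.5, then with a conflicting -0.4.
step1 = cf_combine(0.21, 0.5)              # 0.605
step2 = cf_combine(step1, -0.4)            # ~0.34
print(round(step1, 3), round(step2, 2))
```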
The CFCOMBINE formula preserves the commutativity of evidence. That is

CFCOMBINE(X,Y) = CFCOMBINE(Y,X)

and so the order in which evidence is received does not affect the result.

Advantages and disadvantages of certainty factors

The CF formalism has been quite popular with expert system developers since its creation because

1. It is a simple computational model that permits experts to estimate their confidence in the conclusions being drawn.

2. It permits the expression of belief and disbelief in each hypothesis, allowing the expression of the effect of multiple sources of evidence.

3. It allows knowledge to be captured in a rule representation while allowing the quantification of uncertainty.

4. The gathering of the CF values is significantly easier than the gathering of values for the other methods. No statistical base is required - you merely have to ask the expert for the values.

Many systems, including MYCIN, have utilized this formalism and have displayed a high degree of competence in their application areas. But is this competence due to these systems’ ability to manipulate and reason with uncertainty or is it due to other factors?
Some studies have shown that changing the certainty factors, or even turning off the CF reasoning portion of MYCIN, does not seem to affect the correct diagnoses much.

This revealed that the knowledge described within the rules contributes much more to the final, derived results than the CF values do.

Other criticisms of this uncertainty reasoning method include among others:

1. The CF values lack a theoretical foundation. Basically, they are partly ad hoc, an approximation of probability theory.

2. Non-independent evidence can be expressed and combined only by “chunking” it together within the same rule. When large quantities of non-independent evidence must be expressed, this proves to be unsatisfactory.

3. The CF values can be the opposite of the conditional probabilities.

For example, if

P(H1) = 0.8, P(H2) = 0.2
P(H1 | E) = 0.9, P(H2 | E) = 0.8

then CF(H1, E) = 0.5 and CF(H2, E) = 0.75.

Since one purpose of CF is to rank hypotheses in terms of likely diagnosis, it is a contradiction for a disease to have a higher conditional probability P(H | E) and yet have a lower certainty factor, CF(H, E).

3: Dempster-Shafer Theory

Here we discuss another method for handling uncertainty, called Dempster-Shafer theory. It evolved during the 1960s and 1970s through the efforts of Arthur Dempster and one of his students, Glenn Shafer.

This theory was designed as a mathematical theory of evidence.

The development of the theory was motivated by the observation that probability theory is not able to distinguish between uncertainty and ignorance owing to incomplete information.

Frames of discernment

Consider a set of possible elements, called the environment,

Θ = {θ1, θ2, ..., θn}

that are mutually exclusive and exhaustive.

The environment is the set of objects that are of interest to us.

For example,

Θ = {airline, bomber, fighter}
Θ = {red, green, blue, orange, yellow}

One way of thinking about Θ is in terms of questions and answers. Suppose

Θ = {airline, bomber, fighter}

and the question is, "Which are the military aircraft?" The answer is the subset of Θ

{θ2, θ3} = {bomber, fighter}

Each subset of Θ can be interpreted as a possible answer to a question.
Since the elements are mutually exclusive and the environment is exhaustive, there can be only one correct answer subset to a question.

Of course, not all possible questions may be meaningful.

The subsets of the environment are all possible valid answers in this universe of discourse.

An environment is also called a frame of discernment.

The term discern means that it is possible to distinguish the one correct answer from all the other possible answers to a question.

The power set of the environment (with 2^N subsets for a set of size N) has as its elements all answers to the possible questions of the frame of discernment.

Mass Functions and Ignorance

In Bayesian theory, the posterior probability changes as evidence is acquired. Likewise in Dempster-Shafer theory, the belief in evidence may vary.

It is customary in Dempster-Shafer theory to think about the degree of belief in evidence as analogous to the mass of a physical object.

That is, the mass of evidence supports a belief.

The reason for the analogy with an object of mass is to consider belief as a quantity that can move around, be split up, and combined.

A fundamental difference between Dempster-Shafer theory and probability theory is the treatment of ignorance.

As discussed in Chapter 4, probability theory must distribute an equal amount of probability even in ignorance.

For example, if you have no prior knowledge, then you must assume the probability P of each possibility is

P = 1/N

where N is the number of possibilities.

E.g., The formula P(H) + P(H’) = 1 must be enforced

The Dempster-Shafer theory does not force belief to be assigned to ignorance or refutation of a hypothesis.

The mass is assigned only to those subsets of the environment to which you wish to assign belief.

Any belief that is not assigned to a specific subset is considered nonbelief (no belief) and is just associated with the environment Θ.

Belief that refutes a hypothesis is disbelief, which is not nonbelief.

For example, suppose we are trying to identify whether an aircraft is hostile, and there is evidence of 0.7 indicating a belief that the target aircraft is hostile, where hostile aircraft are only considered to be bombers and fighters. Thus, the mass assignment is to the subset {bomber, fighter}, and

m1({bomber, fighter}) = 0.7

The rest of the belief is left with the environment, Θ, as nonbelief:

m1(Θ) = 1 - 0.7 = 0.3

The Dempster-Shafer theory thus differs in a major way from probability theory, which would assume that

P(hostile) = 0.7
P(non-hostile) = 1 - 0.7 = 0.3

In Dempster-Shafer theory, the 0.3 is held as nonbelief in the environment by m1(Θ). This means neither belief nor disbelief in the evidence to a degree of 0.3.

A mass has considerably more freedom than a probability, as shown in the table below.

Dempster-Shafer theory                                   Probability theory
---------------------------------------------------------------------------
m(Θ) does not have to be 1                               Σi Pi = 1
If X ⊆ Y, it is not necessary that m(X) ≤ m(Y)           If X ⊆ Y, then P(X) ≤ P(Y)
No required relationship between m(X) and m(X')          P(X) + P(X') = 1
---------------------------------------------------------------------------

We now state things more formally.

Let Θ be a frame of discernment (environment). A mass assignment function assigns a number m(x) to each x ⊆ Θ such that:

(1) 0 ≤ m(x) ≤ 1
(2) m(∅) = 0
(3) Σ_{x ⊆ Θ} m(x) = 1

Let Θ be a frame of discernment (environment), and let m be a mass assignment function on Θ. A set x ⊆ Θ is called a focal element of m if m(x) > 0. The core of m, denoted by k(m), is the set of all focal elements of m.

Let us consider a medical example. Suppose

Θ = {heart-attack, pericarditis, pulmonary-embolism, aortic-dissection}.

Note that each mass assignment on Θ assigns mass numbers to 2^4 = 16 sets. If for a specific patient there is no evidence pointing at a certain diagnosis in particular, the mass of 1 is assigned to Θ:

m0(x) = 1    if x = Θ
      = 0    otherwise

Each proper subset of Θ is assigned the number 0. The core of m0 is equal to {Θ}.

Now suppose that some evidence has become available that points to the composite hypothesis heart-attack or pericarditis with some certainty.

Then the subset {heart-attack, pericarditis} will be assigned a mass, e.g., 0.4. Due to lack of further information, the remaining certainty 0.6 is assigned to Θ:

m1(x) = 0.6    if x = Θ
      = 0.4    if x = {heart-attack, pericarditis}
      = 0      otherwise

Now suppose we have obtained some evidence against the hypothesis that our patient is suffering from pericarditis. This information can be considered as support for hypothesis that the patient is not suffering from pericarditis. This is equivalent to the composite hypothesis heart-attack or pulmonary-embolism or aortic-dissection. We therefore assign a mass, for example 0.7, to the set {heart-attack, pulmonary-embolism, aortic-dissection}

m2(x) = 0.3    if x = Θ
      = 0.7    if x = {heart-attack, pulmonary-embolism, aortic-dissection}
      = 0      otherwise
Combining evidence

Dempster-Shafer theory provides a function for computing, from two pieces of evidence and their associated masses, the combined influence of these pieces of evidence.

This function is known as Dempster’s rule of combination.

Let m1 and m2 be mass assignments on Θ, the frame of discernment. The combined mass is computed using the formula (a special form of Dempster's rule of combination)

m1 ⊕ m2(Z) = Σ_{X ∩ Y = Z} m1(X) m2(Y)

For instance, using our hostile aircraft example, based on two pieces of evidence, we obtain

                     m2({B}) = 0.9      m2(Θ) = 0.1
-----------------------------------------------------------
m1({B, F}) = 0.7     {B}  0.63          {B, F}  0.07
m1(Θ) = 0.3          {B}  0.27          Θ       0.03

E.g., the entry T11 (row 1, column 1) is calculated as

T11({B}) = m1({B, F}) m2({B}) = (0.7)(0.9) = 0.63
Once the individual mass products have been calculated as shown above, then according to Dempster’s Rule the products over the common set of intersections are added

m3({B}) = m1 ⊕ m2({B}) = 0.63 + 0.27 = 0.90       (bomber)

m3({B, F}) = m1 ⊕ m2({B, F}) = 0.07                (bomber or fighter)

m3(Θ) = m1 ⊕ m2(Θ) = 0.03                          (nonbelief)
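The cross-product-and-sum computation above is easy to express in code. In the sketch below (an illustration, not from the original notes), masses are dictionaries keyed by frozensets, and the environment Θ is {A, B, F} for airliner, bomber, fighter.

```python
# Illustrative sketch of the special (conflict-free) form of Dempster's rule.
THETA = frozenset({"A", "B", "F"})            # A = airliner, B = bomber, F = fighter

m1 = {frozenset({"B", "F"}): 0.7, THETA: 0.3}
m2 = {frozenset({"B"}): 0.9, THETA: 0.1}

def combine(ma, mb):
    """m1 ⊕ m2(Z): sum of ma(X) mb(Y) over all X, Y with X ∩ Y = Z."""
    combined = {}
    for x, mass_x in ma.items():
        for y, mass_y in mb.items():
            z = x & y
            combined[z] = combined.get(z, 0.0) + mass_x * mass_y
    return combined

m3 = combine(m1, m2)
print(round(m3[frozenset({"B"})], 2),         # 0.9
      round(m3[frozenset({"B", "F"})], 2),    # 0.07
      round(m3[THETA], 2))                    # 0.03
```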

The m3({B}) represents the belief that the target is a bomber and only a bomber.

m3({B, F}) and m3(Θ) carry additional information: since their sets include a bomber, it is plausible that their masses may also contribute to the belief in a bomber.

So 0.07 + 0.03 = 0.1 may be added to the belief of 0.9 in the bomber set to yield the maximum belief (= 1) that it could be a bomber. This is called the plausible belief.

We have two belief values for the bomber, 0.9 and 1. This pair represents a range of belief. It is called an evidential interval.

The lower bound is called the support (Spt) or Bel, and the upper bound is called the plausibility (Pls).

For instance, 0.9 is the lower bound in the above example, and 1 is the upper bound.

The support is the minimum belief based on the evidence, while the plausibility is the maximum belief we are willing to give.

Thus, 0 ≤ Bel ≤ Pls ≤ 1. The table below shows some common evidential intervals.

Evidential interval                              Meaning
--------------------------------------------------------------------------
[1, 1]                                           Completely true
[0, 0]                                           Completely false
[0, 1]                                           Completely ignorant
[Bel, 1], where 0 < Bel < 1                      Tends to support
[0, Pls], where 0 < Pls < 1                      Tends to refute
[Bel, Pls], where 0 < Bel ≤ Pls < 1              Tends to both support and refute
--------------------------------------------------------------------------

The Bel (belief function, or support) is defined to be the total belief of a set and all its subsets.

Bel(X) = Σ_{Y ⊆ X} m(Y)

For example,

Bel1({B, F}) = m1({B, F}) + m1({B}) + m1({F}) = 0.7 + 0 + 0 = 0.7

The belief function is different from the mass, which is the belief in the evidence assigned to a single set.
Since belief functions are defined in terms of masses, the combination of two belief functions also can be expressed in terms of masses of a set and all its subsets.
For example:

Bel1 ⊕ Bel2({B}) = m1 ⊕ m2({B}) + m1 ⊕ m2(∅) = 0.90 + 0 = 0.90

Bel1 ⊕ Bel2({B, F}) = m1 ⊕ m2({B, F}) + m1 ⊕ m2({B}) + m1 ⊕ m2({F}) = 0.07 + 0.90 + 0 = 0.97

Bel1 ⊕ Bel2(Θ) = m1 ⊕ m2(Θ) + m1 ⊕ m2({B, F}) + m1 ⊕ m2({B}) = 0.03 + 0.07 + 0.90 = 1
Bel(() = 1 in all cases since the sum of masses must always equal 1.

The evidential interval of a set S, EI(S), may be defined in terms of the belief.

EI(S) = [Bel(S), 1 - Bel(S’)]

For instance, if S = {B}, then S’ = {A, F} and

Bel({A, F}) = m1 ⊕ m2({A, F}) + m1 ⊕ m2({A}) + m1 ⊕ m2({F}) = 0 + 0 + 0 = 0

since these are not focal elements and the mass is 0 for nonfocal elements.

Thus, EI({B}) = [0.90, 1 - 0] = [0.9, 1]

Also, since Bel({A}) = 0 and

Bel({B, F}) = Bel1 ⊕ Bel2({B, F}) = 0.97,

we have EI({B, F}) = [0.97, 1 - 0] = [0.97, 1]

EI({A}) = [0, 0.03]

The plausibility is defined as the degree to which the evidence fails to refute X:

Pls(X) = 1 - Bel(X')

Thus, EI(X) = [Bel(X), Pls(X)]

The evidential interval [total belief, plausibility] can be expressed,

[evidence for support, evidence for support + ignorance]

The dubiety (Dbt) or doubt represents the degree to which X is disbelieved or refuted.

The ignorance (Igr) is the degree to which the mass supports X and X’.

These are defined as follows:

Dbt(X) = Bel(X') = 1 - Pls(X)

Igr(X) = Pls(X) - Bel(X)
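These derived quantities can be computed from a mass assignment in a few lines. The following Python sketch (illustrative only; it reuses the combined masses m3 from the aircraft example) computes Bel, Pls, the evidential interval, the dubiety and the ignorance.

```python
# Illustrative sketch of Bel, Pls, EI, Dbt and Igr for the aircraft example.
THETA = frozenset({"A", "B", "F"})
m3 = {frozenset({"B"}): 0.90, frozenset({"B", "F"}): 0.07, THETA: 0.03}

def bel(m, x):
    """Total belief in x: the sum of the masses of x and all of its subsets."""
    return sum(mass for focal, mass in m.items() if focal <= x)

def pls(m, x):
    """Plausibility: the degree to which the evidence fails to refute x."""
    return 1 - bel(m, THETA - x)

def evidential_interval(m, x):
    return (bel(m, x), pls(m, x))

def dbt(m, x):
    """Dubiety (doubt): the degree to which x is refuted."""
    return bel(m, THETA - x)

def igr(m, x):
    """Ignorance: the width of the evidential interval."""
    return pls(m, x) - bel(m, x)

bomber = frozenset({"B"})
print(evidential_interval(m3, bomber))              # (0.9, 1)
print(dbt(m3, bomber), round(igr(m3, bomber), 2))   # 0 0.1
```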

The Normalization of Belief

Let us see an example. Suppose a third source now reports conflicting evidence of an airliner:

m3({A}) = 0.95, m3(Θ) = 0.05

The table shows how the cross products are calculated.

                    m1 ⊕ m2({B}) = 0.90    m1 ⊕ m2({B, F}) = 0.07    m1 ⊕ m2(Θ) = 0.03
-----------------------------------------------------------------------------------------
m3({A}) = 0.95      ∅  0.855               ∅  0.0665                 {A}  0.0285
m3(Θ) = 0.05        {B}  0.045             {B, F}  0.0035            Θ  0.0015

Thus

m1 ⊕ m2 ⊕ m3({A}) = 0.0285
m1 ⊕ m2 ⊕ m3({B}) = 0.045
m1 ⊕ m2 ⊕ m3({B, F}) = 0.0035
m1 ⊕ m2 ⊕ m3(Θ) = 0.0015
m1 ⊕ m2 ⊕ m3(∅) = 0

Note that for this example, the sum of all the masses is less than 1:

Σ m1 ⊕ m2 ⊕ m3(X) = 0.0285 + 0.045 + 0.0035 + 0.0015 = 0.0785
However, a sum of 1 is required because the combined evidence, m1 ⊕ m2 ⊕ m3, is a valid mass and the sum over all focal elements must be 1.

This is a problem.
The solution to this problem is a normalization of the focal elements by dividing each focal element by 1 - κ, where κ is defined for any sets X and Y as

κ = Σ_{X ∩ Y = ∅} m1(X) m2(Y)

For our problem, κ = 0.855 + 0.0665 = 0.9215

Dividing each m1 ⊕ m2 ⊕ m3 focal element by 1 - κ gives

m1 ⊕ m2 ⊕ m3({A}) = 0.363
m1 ⊕ m2 ⊕ m3({B}) = 0.573
m1 ⊕ m2 ⊕ m3({B, F}) = 0.045
m1 ⊕ m2 ⊕ m3(Θ) = 0.019

The single piece of evidence for {A} has considerably reduced the belief in {B}.
The total normalized belief in {B} is now

Bel({B}) = m1 ⊕ m2 ⊕ m3({B}) = 0.573

Bel({B}') = Bel({A, F}) = m1 ⊕ m2 ⊕ m3({A, F}) + m1 ⊕ m2 ⊕ m3({A}) + m1 ⊕ m2 ⊕ m3({F}) = 0 + 0.363 + 0 = 0.363

and so the evidential interval is now

EI({B}) = [Bel({B}), 1 - Bel({B}’)] = [0.573, 1 - 0.363] = [0.573, 0.637]

The general form of Dempster's rule of combination is

m1 ⊕ m2(Z) = ( Σ_{X ∩ Y = Z} m1(X) m2(Y) ) / (1 - κ)

Note that the rule is undefined when κ = 1.
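The general, normalized rule can be sketched as a small extension of the earlier combination function (again illustrative Python; it reproduces the normalized masses 0.363, 0.573, 0.045 and 0.019 computed above).

```python
# Illustrative sketch of the general (normalized) form of Dempster's rule:
# conflict mass assigned to the empty set is discarded and the remainder is
# rescaled by 1 / (1 - kappa).
THETA = frozenset({"A", "B", "F"})

def combine_normalized(ma, mb):
    combined, kappa = {}, 0.0
    for x, mass_x in ma.items():
        for y, mass_y in mb.items():
            z = x & y
            if z:
                combined[z] = combined.get(z, 0.0) + mass_x * mass_y
            else:
                kappa += mass_x * mass_y        # mass committed to the empty set
    if kappa == 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {focal: mass / (1 - kappa) for focal, mass in combined.items()}

m12 = {frozenset({"B"}): 0.90, frozenset({"B", "F"}): 0.07, THETA: 0.03}
m3 = {frozenset({"A"}): 0.95, THETA: 0.05}

result = combine_normalized(m12, m3)
for focal, mass in sorted(result.items(), key=lambda item: -item[1]):
    print(sorted(focal), round(mass, 3))
# Expected (approximately): ['B'] 0.573, ['A'] 0.363, ['B', 'F'] 0.045,
# ['A', 'B', 'F'] 0.019
```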

Difficulty with the Dempster-Shafer theory

One difficulty with the Dempster-Shafer theory occurs with normalization and may lead to results which are contrary to our expectation.

The problem is that it ignores the belief that the object being considered does not exist.

For example, the beliefs of two doctors, A and B, in a patient's illness are as follows:

mA(meningitis) = 0.99, mA(brain tumor) = 0.01
mB(concussion) = 0.99, mB(brain tumor) = 0.01

Both doctors think there is a very low chance, 0.01, of a brain tumor but greatly disagree on the major problem.

Dempster's rule of combination gives a combined belief of 1 in the brain tumor. The result is very unexpected.

[Figure labels recovered from the original document's diagrams: the initial and revised probability trees for oil exploration (priors P(O) = 0.6, P(O') = 0.4; joint probabilities P(+ ∩ O) = 0.48, P(+ ∩ O') = 0.04, P(- ∩ O) = 0.12, P(- ∩ O') = 0.36; unconditional probabilities P(+) = 0.52, P(-) = 0.48; posteriors P(O|+) = 12/13, P(O'|+) = 1/13, P(O|-) = 1/4, P(O'|-) = 3/4); the initial and complete Bayesian decision trees for oil exploration, with leaf payoffs of $1,000,000, -$1,000,000 and -$50,000 and expected payoffs of $846,153 at node C and $416,000 at node A after backward induction; and the diagram of two rules, CF1(H, e) = CF1(E, e) CF1(H, E) and CF2(H, e) = CF2(E, e) CF2(H, E), concluding the same hypothesis H, with antecedents combined by AND (min), OR (max) and NOT (-).]

Similar Documents

Premium Essay

A Decision of Uncertainty

... A Decision of Uncertainty Latanya Franklin QNT561 September 17, 2012 Richard LA Valley A Decision of Uncertainty To reduce prices or increase Marketing budget in order to capture market share Sony Incorporated wants to make some changes within the business that in hopes will capture the market share of the industry. The only way to do this is by changing the pricing policy and strategy for the company. The first approach would be to create a development stage that will allow the managers to start setting prices this will allow the company to avoid releasing products or services that has no ability of sustaining profitable prices in the market. By changing the company pricing policies and strategies the company has a greater chance of receiving higher profits including an increase in the company market shares. With changes comes risks and uncertainty of making such changes. The company has to include other factors that include the reaction of the competitors. If the company makes the decision to change its price policies and strategies how will this change impact the company against their competitors by reducing prices. Questions of concern is will the competitors see this a threat and change their prices in response and if so what type of change will they make that will put the company at risk of capturing the market share. The decision that relies on the company here is rather to continue with the price policy and strategy which risks the chance of competitors taking......

Words: 1104 - Pages: 5

Free Essay

Decision of Uncertainty Paper

...A Decision of Uncertainty A Decision of Uncertainty Paper There are decisions that people make where the outcome is presumably known and, there are the decisions that people make where the results are unknown. The latter part of the aforementioned statement is also known as decisions of uncertainty. To make these choices with more confidence, we will explore concepts that will formulate these judgments. We also have to include appropriate probability concepts that will help limit uncertainty in certain decisions. This paper will disclose the decision to reside in the tri-state area with the probability of destructive hurricanes occurring. Next this paper will reveal concepts and the outcome from the statistical analysis that was used to determine the final decision and, the tradeoffs between accuracy and precision required by various probability concepts. As a final point, this paper will demonstrate the effects the decision had on the data provided and the decision that was ultimately made. Probability Concepts and Application Hurricane Sandy hit Atlantic City, New Jersey on October 29, 2012. A 900-mile wide storm, Sandy affected the entire northeastern United States with devastating winds, rain and floods. New Jersey and New York suffered the worst from the super storm leaving thousands of people without power for days. People living in this area were not prepared or expected the storm to devastate the area as it did leaving homes and personal...

Words: 962 - Pages: 4

Premium Essay

Uncertainty Paper

...Business Decision of Uncertainty: Free Flu Vaccinations QNT/561 - Applied Business Research and Statistics Ken LeCour, M.B.A. & M.S.Q.A. January 14, 2013 Business Decision of Uncertainty: Free Flu Vaccinations As an employee of United Parcel Service, (UPS) I had some attractive employee benefits: stock options, 401k, personal and vacation days, education reimbursement, health care insurance with low co-pays, and expense allowances for management employees. In addition, the company would supply Christmas turkeys and many times a small bonus. Besides these charitable benefits, the company offered free flu vaccination to any employee who desired one. UPS promoted the free vaccination by handing-out coupons for flu shots that was administered at the local pharmacy, hospital or med center. The benefit of developing healthy workforce is two-fold offering a more productive work environment and limiting absenteeism which is associated with the flu. The flu vaccination potentially can provide a tremendous return on investment. So using the cost on return allowed me to investigate and compare the benefit to cost. ...

Words: 1003 - Pages: 5

Premium Essay

Decisions of Uncertainty

...on the decision-makers. The University of Phoenix Medical Center is in the process of building an addition that will have twelve new operating rooms replacing an existing eight room surgical suite. The existing surgical suite has eight surgical tables that will be replaced and the new surgery suite will have twelve new tables. Four manufacturers of surgery tables have been brought in for evaluation. They each had two weeks of trials and surveys were filled out by surgery employees. In order for the healthcare facility to make a good decision there is a “need to calculate the probability of something occurring or not occurring, make a judgment, consider alternatives and choose an action” (Duggan, n.d., para. 1). The event of the uncertainty is which manufacturer’s tables to choose to buy. If all things are equal the probability of picking the best table is one in four or 25%. This 25% probability can be changed through the trial of the beds in use, the survey of the hospital, previous research and history of each manufacturer, and the history of table use within the facility. Using the survey a confidence interval for each manufacturer’s table can be obtained. Using the confidence interval will in turn help calculate the confidence level of the product in the eyes of the staff. Not only can the staff survey help lead to a high confidence level but research on the products by outside organizations can also help with maker a better probable decision. Point estimate......

Words: 1032 - Pages: 5

Premium Essay

Managing Uncertainty

...BAD 64041 Eric Krizay Citation Thun, Druke, Hoenig (2011). Managing uncertainty, an empirical analysis of supply chain risk management in small and medium sized enterprises. International Journal of Production Research, Vol. 49, No. 18, 15 September 2011, 5511 – 5525. Research Classification In the past years, a fairly new research area has emerged on the supply chain management scene and has gained considerable attention from both academics and practitioners: Supply chain risk management. Thun, Druke, and Hoenig set out to empirically investigate, supply chain risk management in small to medium sized companies. By analyzing data from 67 various German automotive manufacturing plants, the authors seek to identify the key drivers of supply chain risk and look at the various methods or instruments of supply chain risk management to determine their suitability by comparing those smaller enterprises with the larger companies. The results are visualized in the probability-impact-matrix distinguishing between internal and external supply chain risks. Statement of Problem The authors sought to answer two main questions: What are key drivers of supply chain risks, and what is their likelihood to occur and their potential impact on the supply chain? and, what are the instruments for dealing with supply chain risks and their impact on performance? These questions gave way to three Hypothesis: H1. Small to medium-sized enterprises regard their supply chain as more vulnerable......

Words: 778 - Pages: 4

Free Essay

Decision Uncertainty

...Decision of Uncertainty Darylisha Jones QNT/561 February 14, 2011 Paul Thomasman Decision of Uncertainty Introduction Decision: Extending automobile warranties or not? It comes a time when one has to make that decision of extending an automobile warranty when it has expired. Because auto warranties provide well needed protection, it does not come cheap be any means. Therefore, the decision to be made is the price of purchasing a warranty over the cost of repairs without a warranty. An extended warranty supplies the ability of possessing coverage of an automobile. It supplies coverage for repairs, parts, rentals, and even labor at a warranty rate rather paying out- of -pocket for every issue. Research In order to make the correct decision, I will research information on purchasing extended warranty of a 2005 Chevrolet Impala. In one year my warranty of my vehicle will expire, and I will have to decide on purchasing an extended warranty to protect my vehicle. After researching the effects of not extending a warranty can result in high auto repair bills. An average cost of an engine repair without protection for an Impala is $2,000. Therefore, information of whether or not major repairs are needed for this vehicle must be taken into consideration. After gaining information from many auto repair shops of their experience of servicing vehicles, it is wise to acquire extending a warranty. The chances of auto repairs being needed within five years on my......

Words: 902 - Pages: 4

Premium Essay

Managing Uncertainty in a Supply Chain

...inventory? 1 WE  IE  Managing economics of scale in a SC  Cycle inventory (Ch 11)  Managing uncertainty in a SC  Safety inventory and risk polling (Ch 12, 13) 2 WE  IE 1 2014/11/20 Managing Uncertainty in a Supply Chain — Safety Inventory Chapter 12 Wen‐Chih Chen Dept. of Industrial Engineering & Management National Chiao Tung University, TAIWAN 3 WE  IE Safety Inventory  Safety inventory is carried to satisfy possible demand that exceeds the amount forecasted. 4 WE  IE 2 2014/11/20 Determining the Safety Stock Level Safety inventory uncertainty uncertainty Supply Demand Performance: availability (responsiveness), costs (efficiency) 5 WE  IE Key Decisions    What is the appropriate level of product availability? How much safety inventory is needed for the desired level of product availability? What actions can be taken to improve product availability while reducing safety inventory? 6 WE  IE 3 2014/11/20 Measuring Product Availability  Product fill rate (fr)  % of product demand satisfied from product in inventory % of orders filled from available inventory % of replenishment cycles that end with ALL customer demand being met  Order fill rate   Cycle service level (CSL)  7 WE  IE Measuring Demand Uncertainty   : avg. demand @ period i : standard deviation (標準差) of demand (forecast error) @ period i......

Words: 1560 - Pages: 7

Premium Essay

Decision of Uncertainty

...departments causing delay in care. As possible solution to this problem for Saint Francis Medical Center in Peoria, Illinois is the development of a Pediatric Afterhours Clinic. The Pediatric Afterhours Clinic would be available Monday through Friday 5pm to 10pm, and Saturday, Sunday, and holidays 2pm to 10pm. The decision needs to be made by administration whether or not to opening the clinic would be beneficial to the emergency department as well as financially viable. In this paper I will be applying appropriate probability concepts to find resulting data to limit the uncertainty in this decision with rationale, identifying each discrete outcome from the statistical analysis, identifying tradeoffs between accuracy and precision, and will include the recommendation for the decision to be made. a. Include appropriate probability concepts and your application of them to find resulting data to limit the uncertainty in this decision. The probability concept that will be used to make the decision to open a Pediatric Afterhours Clinic is the Poisson distribution. Administration wants to be 90% certain that opening the clinic will alleviate congestion in the emergency department (ED). On average 20 pediatric patients are seen in the ED between the hours of 5pm and 10pm; of those 20 patients, an average of three patients are considered non emergent between the hour of 9pm and 10pm. The probability of these individuals requiring non emergent care between 9pm and 10pm......

Words: 620 - Pages: 3

Free Essay

Uncertainty Avoidance

...sophisticated technologies and reducing the work force would lead to a more acceptable level of profit. Paul is fully aware that the Chinese laborers are being paid a fraction of what laborers in America are paid. On the other hand, according to Chui Wai, the joint venture was also fulfilling his expectations; he felt a great sense of accomplishment by providing job opportunities for over 3,000 people. This tells us that Paul Denver is both a highly assertive and individualistic person. Chui Wai, however, is his total opposite, especially coming from an Asian background, which is often described as less assertive and part of a collective society. Uncertainty avoidance is a value characterized by people’s intolerance for uncertainty and ambiguity and the resulting support for beliefs that promise certainty and conformity. Chui Wai shows relatively high uncertainty-avoidance characteristics. He was satisfied with earning a 5% annual return on investment because it kept him on the safe side: it was neither too little nor too high, which reduced the risk of being targeted by the local authorities. A clear distinction can be perceived from this scenario of how Asian culture differs from Western culture. Despite that, in today’s environment it is always ideal for managers to go global yet keep the local culture and ethics in mind to keep moving forward in this highly competitive business environment....

Words: 545 - Pages: 3

Premium Essay

Amelie and Uncertainty Avoidance

...While it is difficult to tell whether Amelie’s behavior comes simply from French culture or more so from her unorthodox upbringing, it is obvious at the beginning of the movie that she is not fond of ambiguous situations. This makes perfect sense, as the French are typically known for having high uncertainty avoidance in their everyday lives. In fact, they are ranked 10th highest in level of avoidance according to Hofstede’s Dimensions of Culture. Amelie seems to live in her own comfortable world, typically drifting towards solitude and regularity. However, things soon begin to change when she finds a small toy box hidden long ago by a young boy. Amelie’s mission to find this boy is a step outside of her comfort zone. She has cast herself into an uncertain situation whose outcome, once realized, will totally shift the way Amelie lives her life. Amelie does find the man who hid the box and feels that she has done something worthwhile, causing her to become a “do-gooder”. Though she makes this decision to try and help people in their daily lives, Amelie takes many actions to avoid ambiguous situations. To start, when Amelie finally does track down the man, she avoids actually confronting or conversing with him. Instead, she attracts his attention by calling a phone booth as he walks by. When the man answers the phone she quickly hangs up, but she has left the box in the phone booth for the man to find. Another occurrence of avoiding a situation actually comes......

Words: 449 - Pages: 2

Premium Essay

Microeconomics - Uncertainty

...“If there's one thing that's certain in business, it's uncertainty.” Stephen Covey. Uncertainty is the lack of information, which makes the probabilities of a defined outcome unknown. Unfortunately, running a business depends primarily on planning for a set of known outcomes. Businesses gather available present and past information to formulate a prediction of the future. However, predictions, no matter how well considered they are, cannot account for all variables in business. Therefore, businesses generate a set of probabilities for variable future changes and make decisions based on those probabilities. This is known as risk management. There are many sources of uncertainty, such as governmental regulations, disasters affecting markets, or even scientific and technological discoveries that change supply or demand. These uncertainties represent great obstacles for businesses because they might lead to poor decisions, which may result in great losses. Managing uncertain changes is tricky. When uncertainty builds, businesses must learn to adapt; otherwise they will sink. In the following paragraphs we discuss two instances where businesses have been affected by outside factors that created uncertainty, and how those companies adapted to continue doing business in uncertain times. One of our team members works in the insurance industry at Campus Benefits, a private broker working in the educational market. In 2010, the 111th Congress passed healthcare legislation,......

Words: 1818 - Pages: 8

Free Essay

Uncertainty

...can be very confident that they do not currently have the disease. Those who do not pass are also unlikely to have colorectal cancer, but only these patients should be subjected to more costly and more invasive tests such as colonoscopy. In short, using Bayes’ theorem to analyze these probabilities will allow us to greatly reduce the number of costly exploratory procedures performed to screen for colorectal cancer. This can reduce the costs for a hospital, insurance company, and patient. It can also reduce the number and complexity of treatments that patients who are exhibiting no symptoms must undergo as part of their preventative health care. Read more: 1. Decision of Uncertainty Paper Research statistical data - JustAnswer http://www.justanswer.com/writing/2c3kg-1-decision-uncertainty-paper-research-statistical-data.html#ixzz1jrTMZLdn...
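The Bayes'-theorem reasoning the excerpt describes can be made concrete with a short sketch: given a disease prevalence and a test's sensitivity and specificity, compute the posterior probability of disease after a positive or negative screen. The prevalence, sensitivity, and specificity below are hypothetical placeholders, not figures from the essay.

```python
# Bayes' theorem for a screening test: P(disease | test result).
# All three input probabilities are assumed values for illustration only.

prevalence = 0.005   # P(disease) in the screened population (assumed)
sensitivity = 0.90   # P(positive | disease) (assumed)
specificity = 0.95   # P(negative | no disease) (assumed)

# Total probability of a positive screen.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

# Total probability of a negative screen, and the posterior for those who "pass".
p_negative = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
p_disease_given_negative = (1 - sensitivity) * prevalence / p_negative

print(f"P(disease | positive screen) = {p_disease_given_positive:.3f}")
print(f"P(disease | negative screen) = {p_disease_given_negative:.5f}")
# Only the relatively small positive-screen group would be referred for colonoscopy.
```

With a low prevalence, even a positive screen leaves the posterior probability of disease modest, while a negative screen makes it very small, which is the cost-reduction argument the excerpt makes.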

Words: 773 - Pages: 4

Free Essay

Decisions of Uncertainty

...Decision of Uncertainty Paper QNT/561 Applied Business Research and Statistics November 23, 2011 Decision of Uncertainty Paper August of 2006 was a memorable month for every citizen of the El Paso area. That year, record-breaking rainfall, which was twice the annual average, destroyed more than 300 homes and caused 100 million dollars in damages. For days after the floodwater subsided, the area was still under threat of flooding because of the substandard drainage system and outdated levee system. This incident forced many El Pasoans to evaluate the possibility of future flooding and estimate the cost of purchasing flood insurance. According to a study performed by the National Weather Service Forecast Office, deep convection, which produces excessive rainfall and flash flooding, poses a threat to lives and property over south-central and southwestern New Mexico and far western Texas, primarily during the summer monsoon season. Forecasting these phenomena is difficult across this region due to the irregular terrain, the sparse data, and the relatively poor performance of numerical models in the prediction of heavy rain across the southwestern United States (Rogash, 2003). Therefore, we can safely assume the 2006 flooding in the El Paso area was a combination of independent variables such as topography, the existing drainage system, weather patterns, and rainfall. Some, such as city representative Eddie Holguin, called it a 500-year......
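The "500-year" label the excerpt ends on refers to an annual exceedance probability of 1/500. A minimal sketch of what that implies, assuming independent years, is to compute the chance of at least one such flood over a planning horizon; the 30-year horizon below is an assumption added for illustration.

```python
# Probability of at least one "500-year" flood over a multi-year horizon,
# assuming each year is independent. The 30-year horizon is assumed.

annual_p = 1 / 500      # annual exceedance probability of a 500-year flood
horizon_years = 30      # assumed planning horizon (e.g., a 30-year mortgage)

p_at_least_one = 1 - (1 - annual_p) ** horizon_years
print(f"P(at least one such flood in {horizon_years} years) = {p_at_least_one:.3f}")
```

Even a rare event accumulates to roughly a 6% chance over 30 years, which is the kind of figure a homeowner would weigh against the cost of flood insurance.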

Words: 861 - Pages: 4

Premium Essay

Decision of Uncertainty

...the appropriate situation. Hypothesis testing is most effective for a scenario with an observed difference, such as when a given hypothesis is true. In the rental insurance scenario, hypothesis testing could be used if the probabilities of the different variables were defined: for example, accidents that took place over the specific route to be traveled between Dallas, TX and Phoenix, AZ and back, with factors such as expected weather conditions and the ratio of daytime to nighttime driving. As these sorts of variables are not defined, it is more effective to use Bayes' theorem. Probability Concepts for Limiting Uncertainty Research has indicated that there is a one in sixteen chance of having an accident. This equates to a 6.25% chance of having an accident. The next step is to set up the variables in order to examine the probabilities that will reduce the uncertainty of the decision. The information listed below will be used in the formula to help determine whether to purchase the vendor insurance or waive it. AI = Accept Insurance = Purchase Rental Insurance Coverage; DI = Decline Insurance = Waive Rental Insurance Coverage. • The probability of having an accident, and therefore using the rental insurance if purchased, is 6.25%: P(AI) = 0.0625. • Therefore, the probability of not having an accident, and as a result not using the rental insurance, is 93.75%: P(DI) = 0.9375. The probability of an accident......
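One way to make the rental-insurance decision concrete is an expected-cost comparison built on the 6.25% accident probability quoted in the excerpt. The sketch below does this; the $25-per-day premium, the four-day trip length, and the $3,000 uninsured damage cost are hypothetical placeholders, not figures from the essay.

```python
# Expected-cost comparison for the rental-insurance decision.
# p_accident is from the excerpt; the other inputs are assumed values.

p_accident = 0.0625         # one-in-sixteen chance of an accident (from the excerpt)
daily_premium = 25.0        # assumed cost of the rental company's coverage per day
trip_days = 4               # assumed length of the Dallas-Phoenix round trip
uninsured_damage = 3000.0   # assumed average out-of-pocket cost if uninsured

cost_with_insurance = daily_premium * trip_days
expected_cost_without = p_accident * uninsured_damage

print(f"Cost with insurance:             ${cost_with_insurance:,.2f}")
print(f"Expected cost without insurance: ${expected_cost_without:,.2f}")
print("Waive coverage" if expected_cost_without < cost_with_insurance else "Accept coverage")
```

The decision flips depending on the premium and the assumed damage cost, which is why the excerpt emphasizes defining the variables before choosing a method.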

Words: 1011 - Pages: 5

Premium Essay

Decision of Uncertainty

...23, 2012 Subhendu Roy In a dynamic society, knowledge is always incomplete, yet a decision must be made. “Decision making is a process of first diverging to explore the possibilities and then converging on a solution(s)" (Cooper & Schindler, 2011). Many decisions are made under uncertainty; that is, with limited information about their potential consequences. The outcome can vary greatly. Most of the time, uncertainty exists whenever people make decisions on a daily basis. The decision to buy rental car insurance will be answered by using the concept of probability. This paper will focus on the application of various probabilities to formulate the decision under uncertainty. Discrete outcomes from the statistical analysis, as well as trade-offs between accuracy and precision obtained from different probability concepts, shall be evaluated. According to car accident statistics (auto, fatal, and drunk-driving data), the estimated chance of having an accident is one in 16 cars. This provides useful information for making an important decision. There are a number of probability concepts that can be used in determining results from the research data that was given. Probability is used to limit the uncertainty of the decision on whether to buy the rental car insurance. The probability concept that works best and meets all of the criteria from the information that was gathered is Bayes’ theorem. The application of Bayes' theorem helps to interpret the data because it is most......

Words: 974 - Pages: 4