
Saturday, 23 April 2022

Question No. 5 - MMPC-005 - Quantitative Analysis for Managerial Applications - MBA and MBA (Banking & Finance)

Solutions to Assignments

                            MBA and MBA (Banking & Finance)

                    MMPC-005 - Quantitative Analysis for Managerial Applications


Question No. 5. 

Write the short note on any three of the following:- 

(a) Mathematical Property of Median 

The median is the value that occupies the central position when the observations are arranged in ascending or descending order. Fifty per cent of the scores lie below it and fifty per cent above it; hence it is also called the 50th percentile or a positional average. The location of the median depends on whether the data set consists of an even or an odd number of values, so the method of finding it differs for even and odd numbers of observations.

Median Properties

In statistics, the properties of the median are explained in the following points.

  • The median does not depend on all the data values in a dataset.
  • The median is fixed by its position in the ordered data and is not affected by the magnitudes of the individual values.
  • The sum of the absolute deviations of the observations is smaller about the median than about any other point; this is its key mathematical property.
  • Every array has a single median.
  • The median cannot be manipulated algebraically: medians of subgroups cannot be weighted and combined into an overall median.
  • In a grouping procedure, the median is stable.
  • The median is not applicable to qualitative (nominal) data.
  • The values must be ordered (and, for grouped data, grouped into classes) before computation.
  • The median can be determined for ratio, interval, and ordinal scales.
  • Outliers and skewed data have little impact on the median.
  • If the distribution is skewed, the median is a better measure of central tendency than the mean.

Formulas for Finding the Median

Calculating the median for individual series is as follows:

  • The data is arranged in ascending or descending order.
  • If it is an odd-sized sample, median = value of ([n + 1] / 2)th item.
  • If it is an even-sized sample, median = ½ [ value of (n / 2)th item + value of ([n / 2] + 1)th item]

Calculating the median for discrete series is as follows:

  • Arrange the data in ascending or descending order.
  • The cumulative frequencies need to be computed.
  • Median = the value of the variable whose cumulative frequency first equals or exceeds N / 2, where N is the total frequency.

The formula for finding the median for a continuous distribution is:

Median = l + ((N / 2 − C) / f) × i

Where l = lower limit of the median class

f = frequency of the median class

N = the sum of all frequencies

i = the width of the median class

C = the cumulative frequency of the class preceding the median class
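The formulas above can be sketched in Python (a minimal illustration; the data values and the grouped-data classes below are invented for the example):

```python
def median_individual(values):
    """Median of an individual (ungrouped) series."""
    s = sorted(values)                     # arrange in ascending order
    n = len(s)
    mid = n // 2
    if n % 2 == 1:                         # odd n: the ([n + 1] / 2)th item
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2       # even n: mean of the two middle items

def median_grouped(classes):
    """Median of a continuous frequency distribution.

    classes: list of (lower_limit, class_width, frequency) tuples in order.
    Applies: Median = l + ((N/2 - C) / f) * i
    """
    N = sum(f for _, _, f in classes)
    cum = 0
    for l, i, f in classes:
        if cum + f >= N / 2:               # first class whose cumulative frequency reaches N/2
            C = cum                        # cumulative frequency before the median class
            return l + ((N / 2 - C) / f) * i
        cum += f

print(median_individual([7, 1, 5, 3, 9]))  # 5
print(median_individual([7, 1, 5, 3]))     # 4.0
print(median_grouped([(0, 10, 5), (10, 10, 8), (20, 10, 7)]))  # 16.25
```

For the grouped example, N = 20, the median class is 10–20 (C = 5, f = 8, i = 10), so the formula gives 10 + ((10 − 5) / 8) × 10 = 16.25.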



(b) Decision Tree Approach 

A decision tree is a support tool with a tree-like structure that models probable outcomes, cost of resources, utilities, and possible consequences. Decision trees provide a way to present algorithms with conditional control statements. They include branches that represent decision-making steps that can lead to a favorable result.

The flowchart structure includes internal nodes that represent tests on attributes at each stage. Every branch stands for an outcome of the test, while the path from the root to a leaf represents a classification rule.

Decision trees are among the most widely used learning algorithms. They yield predictive models that combine accuracy, ease of interpretation, and stability. They are also effective at fitting non-linear relationships and can handle both regression and classification problems.

Applications of Decision Trees
 

1. Assessing prospective growth opportunities
One of the applications of decision trees involves evaluating prospective growth opportunities for businesses based on historical data. Historical data on sales can be used in decision trees that may lead to making radical changes in the strategy of a business to help aid expansion and growth.

 

2. Using demographic data to find prospective clients
Another application of decision trees is in the use of demographic data to find prospective clients. They can help streamline a marketing budget and support informed decisions about the target market the business is focused on. In the absence of decision trees, the business may spend its marketing budget without a specific demographic in mind, which will affect its overall revenues.

 

3. Serving as a support tool in several fields
Lenders also use decision trees to predict the probability of a customer defaulting on a loan by applying predictive model generation using the client’s past data. The use of a decision tree support tool can help lenders evaluate a customer’s creditworthiness to prevent losses.

Decision trees can also be used in operations research in planning logistics and strategic management. They can help in determining appropriate strategies that will help a company achieve its intended goals. Other fields where decision trees can be applied include engineering, education, law, business, healthcare, and finance.
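In managerial decision analysis, a decision tree is typically evaluated by rolling back each branch to its expected monetary value and choosing the branch with the best value. A minimal sketch of this idea for the loan example above (the probabilities and payoffs are entirely hypothetical):

```python
# Expected-value roll-back for a two-branch decision (hypothetical figures):
#   approve the loan -> repaid (p = 0.9, gain 100) or default (p = 0.1, lose 500)
#   reject the loan  -> certain payoff 0
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one decision branch."""
    return sum(p * v for p, v in outcomes)

approve = expected_value([(0.9, 100), (0.1, -500)])   # 90 - 50 = 40
reject = expected_value([(1.0, 0)])                   # 0
best = max(("approve", approve), ("reject", reject), key=lambda t: t[1])
print(best)   # -> ('approve', 40.0)
```

With these invented figures the lender would approve the loan; changing the default probability to, say, 0.2 flips the decision, which is exactly the sensitivity analysis decision trees make easy.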


(c) Stratified vs. Cluster Sampling 

In statistics, two of the most common methods used to obtain samples from a population are cluster sampling and stratified sampling.

This tutorial provides a brief explanation of both sampling methods along with the similarities and differences between them.

Cluster Sampling
Cluster sampling is a type of sampling method in which we split a population into clusters, then randomly select some of the clusters and include all members from those clusters in the sample.

For example, suppose a company that gives whale-watching tours wants to survey its customers. Out of ten tours they give one day, they randomly select four tours and ask every customer about their experience.

Stratified Sampling
Stratified sampling is a type of sampling method in which we split a population into groups, then randomly select some members from each group to be in the sample.

For example, suppose a high school principal wants to conduct a survey to collect the opinions of students. He first splits the students into four strata based on their grade – Freshman, Sophomore, Junior, and Senior – then selects a simple random sample of 50 students from each grade to be included in the survey.

Cluster sampling and stratified sampling share the following similarities:

- Both methods are examples of probability sampling methods – every member of the population has a known, non-zero probability of being selected for the sample.
- Both methods divide a population into distinct groups (either clusters or strata).
- Both methods tend to be quicker and more cost-effective ways of obtaining a sample from a population compared to a simple random sample.

Cluster sampling and stratified sampling share the following differences:

- Cluster sampling divides a population into groups, then includes all members of some randomly chosen groups.
- Stratified sampling divides a population into groups, then includes some members of all of the groups.

(d) Pearson’s Product Moment Correlation Coefficient

The Pearson product-moment correlation coefficient (or Pearson correlation coefficient, for short) is a measure of the strength of a linear association between two variables and is denoted by r. Basically, a Pearson product-moment correlation attempts to draw a line of best fit through the data of two variables, and the Pearson correlation coefficient, r, indicates how far away all these data points are from this line of best fit (i.e., how well the data points fit this new model/line of best fit).

The Pearson correlation coefficient, r, can take a range of values from +1 to -1. A value of 0 indicates that there is no association between the two variables. A value greater than 0 indicates a positive association; that is, as the value of one variable increases, so does the value of the other variable. A value less than 0 indicates a negative association; that is, as the value of one variable increases, the value of the other variable decreases. This is shown in the diagram below:

[Figure: Pearson coefficient – positive, zero, and negative association]

The stronger the association of the two variables, the closer the Pearson correlation coefficient, r, will be to either +1 or -1 depending on whether the relationship is positive or negative, respectively. Achieving a value of +1 or -1 means that all your data points are included on the line of best fit – there are no data points that show any variation away from this line. Values for r between +1 and -1 (for example, r = 0.8 or -0.4) indicate that there is variation around the line of best fit. The closer the value of r to 0 the greater the variation around the line of best fit. Different relationships and their correlation coefficients are shown in the diagram below:

[Figure: Different values for the Pearson correlation coefficient]
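The coefficient can be computed directly from its definition, r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²), as a short sketch (the data points are invented):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # sum of cross-deviations
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # sqrt of sum of squared deviations
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # ≈ +1.0: perfect positive association
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # ≈ -1.0: perfect negative association
```

Both toy datasets lie exactly on a straight line, so r reaches its extreme values; real data would give something strictly between −1 and +1.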


Question No. 4 - MMPC-005 - Quantitative Analysis for Managerial Applications


Question No. 4. 

“Time series analysis is one of the most powerful methods in use, especially for short-term forecasting purposes.” Comment on the statement. 

Time series analysis is one of the most powerful methods in use, especially for short-term forecasting purposes. From the historical data one attempts to obtain the underlying pattern so that a suitable model of the process can be developed, which is then used for purposes of forecasting or studying the internal structure of the process as a whole. We have already seen in Unit 17 that a variety of methods such as subjective methods, moving averages and exponential smoothing, regression methods, causal models and time-series analysis are available for forecasting. Time series analysis looks for the dependence between values in a time series (a set of values recorded at equal time intervals) with a view to accurately identifying the underlying pattern of the data. In the case of quantitative methods of forecasting, each technique makes explicit assumptions about the underlying pattern.

For instance, in using regression models we had first to make a guess on whether a linear or parabolic model should be chosen, and only then could we proceed with the estimation of parameters and model development. We could rely on mere visual inspection of the data or its graphical plot to make the best choice of the underlying model. However, such guesswork, though not uncommon, is unlikely to yield very accurate or reliable results. In time series analysis, a systematic attempt is made to identify and isolate different kinds of patterns in the data. The four kinds of patterns that are most frequently encountered are horizontal, non-stationary (trend or growth), seasonal and cyclical. Generally, a random or noise component is also superimposed. We shall first examine the method of decomposition, wherein a model of the time series in terms of these patterns can be developed. This can then be used for forecasting purposes, as illustrated through an example.

Finally, the question of the choice of a forecasting method is taken up. The characteristics of various methods are summarised, along with the situations in which each is likely to apply. Of course, the cost and the accuracy desired in the forecast play a very important role in the choice.
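As a minimal illustration of one such short-term method, a simple moving average can serve as a one-step-ahead forecast (the monthly demand figures below are invented):

```python
def moving_average_forecast(series, window):
    """Forecast the next period as the mean of the last `window` observations."""
    return sum(series[-window:]) / window

demand = [20, 22, 21, 25, 24, 26]            # hypothetical monthly demand
print(moving_average_forecast(demand, 3))    # (25 + 24 + 26) / 3 = 25.0
```

A short window reacts quickly to recent changes but transmits more noise; a long window smooths the noise but lags behind a trend, which is why identifying the underlying pattern first matters.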




 

Monday, 18 April 2022

Question No. 2 - MMPC-005 - Quantitative Analysis for Managerial Applications - MBA and MBA (Banking & Finance)


Question No. 2. 

Explain the concept of probability theory. Also, explain what are the different approaches to probability theory.    

Probability theory is a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several possible outcomes. The actual outcome is considered to be determined by chance.

The word probability has several meanings in ordinary conversation. Two of these are particularly important for the development and applications of the mathematical theory of probability. One is the interpretation of probabilities as relative frequencies, for which simple games involving coins, cards, dice, and roulette wheels provide examples. The distinctive feature of games of chance is that the outcome of a given trial cannot be predicted with certainty, although the collective results of a large number of trials display some regularity. For example, the statement that the probability of “heads” in tossing a coin equals one-half, according to the relative frequency interpretation, implies that in a large number of tosses the relative frequency with which “heads” actually occurs will be approximately one-half, although it contains no implication concerning the outcome of any given toss. There are many similar examples involving groups of people, molecules of a gas, genes, and so on. Actuarial statements about the life expectancy for persons of a certain age describe the collective experience of a large number of individuals but do not purport to say what will happen to any particular person. Similarly, predictions about the chance of a genetic disease occurring in a child of parents having a known genetic makeup are statements about relative frequencies of occurrence in a large number of cases but are not predictions about a given individual.

The fundamental ingredient of probability theory is an experiment that can be repeated, at least hypothetically, under essentially identical conditions and that may lead to different outcomes on different trials. The set of all possible outcomes of an experiment is called a “sample space.” The experiment of tossing a coin once results in a sample space with two possible outcomes, “heads” and “tails.” Tossing two dice has a sample space with 36 possible outcomes, each of which can be identified with an ordered pair (i, j), where i and j assume one of the values 1, 2, 3, 4, 5, 6 and denote the faces showing on the individual dice. It is important to think of the dice as identifiable (say by a difference in colour), so that the outcome (1, 2) is different from (2, 1). An “event” is a well-defined subset of the sample space. For example, the event “the sum of the faces showing on the two dice equals six” consists of the five outcomes (1, 5), (2, 4), (3, 3), (4, 2), and (5, 1).

A third example is to draw n balls from an urn containing balls of various colours. A generic outcome to this experiment is an n-tuple, where the ith entry specifies the colour of the ball obtained on the ith draw (i = 1, 2,…, n). In spite of the simplicity of this experiment, a thorough understanding gives the theoretical basis for opinion polls and sample surveys. For example, individuals in a population favouring a particular candidate in an election may be identified with balls of a particular colour, those favouring a different candidate may be identified with a different colour, and so on. Probability theory provides the basis for learning about the contents of the urn from the sample of balls drawn from the urn; an application is to learn about the electoral preferences of a population on the basis of a sample drawn from that population.

Another application of simple urn models is to use clinical trials designed to determine whether a new treatment for a disease, a new drug, or a new surgical procedure is better than a standard treatment. In the simple case in which treatment can be regarded as either success or failure, the goal of the clinical trial is to discover whether the new treatment more frequently leads to success than does the standard treatment. Patients with the disease can be identified with balls in an urn. The red balls are those patients who are cured by the new treatment, and the black balls are those not cured. Usually there is a control group, who receive the standard treatment. They are represented by a second urn with a possibly different fraction of red balls. The goal of the experiment of drawing some number of balls from each urn is to discover on the basis of the sample which urn has the larger fraction of red balls. A variation of this idea can be used to test the efficacy of a new vaccine. Perhaps the largest and most famous example was the test of the Salk vaccine for poliomyelitis conducted in 1954. It was organized by the U.S. Public Health Service and involved almost two million children. Its success has led to the almost complete elimination of polio as a health problem in the industrialized parts of the world. Strictly speaking, these applications are problems of statistics, for which the foundations are provided by probability theory.

In contrast to the experiments described above, many experiments have infinitely many possible outcomes. For example, one can toss a coin until “heads” appears for the first time. The number of possible tosses is n = 1, 2,…. Another example is to twirl a spinner. For an idealized spinner made from a straight line segment having no width and pivoted at its centre, the set of possible outcomes is the set of all angles that the final position of the spinner makes with some fixed direction, equivalently all real numbers in [0, 2π). Many measurements in the natural and social sciences, such as volume, voltage, temperature, reaction time, marginal income, and so on, are made on continuous scales and at least in theory involve infinitely many possible values. If the repeated measurements on different subjects or at different times on the same subject can lead to different outcomes, probability theory is a possible tool to study this variability.

Because of their comparative simplicity, experiments with finite sample spaces are discussed first. In the early development of probability theory, mathematicians considered only those experiments for which it seemed reasonable, based on considerations of symmetry, to suppose that all outcomes of the experiment were “equally likely.” Then in a large number of trials all outcomes should occur with approximately the same frequency. The probability of an event is defined to be the ratio of the number of cases favourable to the event—i.e., the number of outcomes in the subset of the sample space defining the event—to the total number of cases. Thus, the 36 possible outcomes in the throw of two dice are assumed equally likely, and the probability of obtaining “six” is the number of favourable cases, 5, divided by 36, or 5/36.
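The two-dice computation above can be verified by simply enumerating the sample space:

```python
# Enumerate the 36 equally likely outcomes of throwing two identifiable dice
# and count the cases favourable to the event "the sum of the faces is six".
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
favourable = [o for o in outcomes if sum(o) == 6]

print(favourable)                            # [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
print(len(favourable), "/", len(outcomes))   # 5 / 36
```

The probability is the ratio of favourable cases to total cases, 5/36, exactly as computed in the text.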

Approaches 

Classical or Mathematical Definition of Probability

Let’s say that an experiment can result in (m + n), equally likely, mutually exclusive, and exhaustive cases. Also, ‘m’ cases are favorable to the occurrence of an event ‘A’ and the remaining ‘n’ are against it. In such cases, the definition of the probability of occurrence of the event ‘A’ is the following ratio:

P(A) = m / (m + n) = \( \frac {\text {Number of cases favorable to the occurrence of the event ‘A’}}{\text {Total number of equally likely, mutually exclusive, and exhaustive cases}} \)

The probability of the occurrence of the event ‘A’ is P(A). Further, P(A) always lies between 0 and 1. These are the limits of probability.

Instead of saying that the probability of the occurrence of the event ‘A’ is m / (m + n), we can say that “Odds are m to n in favor of event A or n to m against the event A.” Therefore,

Odds in favor of the event A = (No. of cases favorable to the occurrence of the event A) / (No. of cases against the occurrence of the event A) = [m / (m + n)] / [n / (m + n)] = m / n

Odds against the event A = (Number of cases against the occurrence of the event A) / (Number of cases in favour of the occurrence of the event A) = n / m

Note: The ratio m / n or n / m is always expressed in its lowest form (integers with no common factors).

  • If m = 0, or if the number of cases favorable to the occurrence of the event A = 0 then, P(A) = 0. In other words, event A is an impossible event.
  • If n = 0, then P(A) = m / m = 1. This means that the event A is a certain or sure event.
  • If neither m = 0 nor n = 0, then the probability of occurrence of the event A lies strictly between 0 and 1; that is, 0 < P(A) < 1.

If the events are mutually exclusive and exhaustive, then the sum of their individual probabilities of occurrence = 1.

For example, if A, B, and C are three mutually exclusive events, then P(A) + P(B) + P(C) = 1. The probability of the occurrence of one particular event is the Marginal Probability of that event.

Choosing an object at random from N objects means that each object has the same probability 1 / N of being chosen.


Empirical Probability or Relative Frequency Probability Theory

The Relative Frequency Probability Theory is as follows:

We can define the probability of an event as the relative frequency with which it occurs in an indefinitely large number of trials. Therefore, if an event occurs ‘a’ times out of ‘n’ trials, then its relative frequency is a / n.

Further, the value that a / n approaches as ‘n’ tends to infinity is the limit of the relative frequency.

Symbolically,

P(A) = lim (n → ∞) a / n

However, in practice, we write the estimate of P(A) as follows:

P(A) = a / n

Classical probability is normally encountered in problems dealing with games of chance, whereas empirical probability is derived from past experience and is used in many practical problems.
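The relative-frequency idea can be illustrated with a short simulation of coin tosses, where a / n drifts toward the classical value 1/2 as n grows:

```python
import random

random.seed(1)
# Relative frequency of "heads" in n tosses of a fair coin, for increasing n.
for n in [100, 10_000, 1_000_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)   # a / n settles near 0.5 as n grows
```

No single run of 100 tosses is guaranteed to be close to 0.5; it is the long-run relative frequency that stabilises, which is precisely the empirical definition of probability.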

Total Probability Theorem or the Addition Rule of Probability

If A and B are two events, then the probability that at least one of them occurs is P(A∪B). We also have,

P(A∪B) = P(A) + P(B) – P(A∩B)

If the two events are mutually exclusive, then P(A∩B) = 0. In such cases, P(A∪B) = P(A) + P(B).
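The addition rule can be checked by enumeration. In a standard 52-card deck, with A = "the card is an ace" and B = "the card is a heart", P(A∪B) = 4/52 + 13/52 − 1/52 = 16/52 = 4/13:

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "spades", "diamonds", "clubs"]
deck = [(r, s) for r in ranks for s in suits]

A = {c for c in deck if c[0] == "A"}        # event: the card is an ace
B = {c for c in deck if c[1] == "hearts"}   # event: the card is a heart

lhs = Fraction(len(A | B), len(deck))       # P(A ∪ B) counted directly
rhs = Fraction(len(A), 52) + Fraction(len(B), 52) - Fraction(len(A & B), 52)
print(lhs, rhs)   # 4/13 4/13
```

The subtraction of P(A∩B) removes the double-counted ace of hearts, which is why the rule holds with equality.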


Multiplication Rule

If A and B are two events, the probability of their joint or simultaneous occurrence is:

P(A∩B) = P(A) . P(B/A)

If the events are independent, then

  • P(A/B) = P(A)
  • P(B/A) = P(B)

 


Therefore, we now have:

If the events are independent, then P(A∩B) = P(A) . P(B), and hence P(A∪B) = P(A) + P(B) – P(A) . P(B)

Also, P(A/B) is the conditional probability of the occurrence of the event A when event B has already occurred. Similarly, P(B/A) is the conditional probability of the occurrence of event B when event A has already occurred. If the events are independent, then the occurrence of A does not affect the occurrence of B.

∴ P(B/A) = P(B)

Also, P(A/B) = P(A)
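Conditional probability and independence can also be checked by enumeration. For two dice, let A = "the first die shows an even number" and B = "the second die shows an even number"; knowing B leaves P(A) unchanged:

```python
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = {o for o in outcomes if o[0] % 2 == 0}   # first die even
B = {o for o in outcomes if o[1] % 2 == 0}   # second die even

P = lambda E: Fraction(len(E), len(outcomes))
P_A_given_B = Fraction(len(A & B), len(B))   # conditional probability P(A/B)

print(P(A), P_A_given_B)           # 1/2 1/2: knowing B does not change P(A)
print(P(A & B) == P(A) * P(B))     # True: multiplication rule for independent events
```

Replacing B with, say, "the sum of the dice is even" would break the equality, showing how the same enumeration detects dependence.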

