
In probability theory, the sample space (also called sample description space[1] or possibility space[2]) of an experiment or random trial is the set of all possible outcomes or results of that experiment.[3] A sample space is usually denoted using set notation, and the possible ordered outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set"). The elements of a sample space may be numbers, words, letters, or symbols. A sample space may be finite, countably infinite, or uncountably infinite.[4]

For example, if the experiment is tossing a coin, the sample space is typically the set {head, tail}, commonly written {H, T}.[5] For tossing two coins, the corresponding sample space would be {(head,head), (head,tail), (tail,head), (tail,tail)}, commonly written {HH, HT, TH, TT}.[6] If the order of the two tosses is disregarded, the sample space becomes {{head,head}, {head,tail}, {tail,tail}}.
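
Such small sample spaces can be enumerated directly, for example with Python's standard itertools module; the ordered sample space is the Cartesian product {H, T} × {H, T}, while the unordered one consists of the size-2 multisets drawn from {H, T}. A minimal sketch:

    from itertools import product, combinations_with_replacement

    # Ordered outcomes: the Cartesian product {H, T} x {H, T}
    ordered = list(product("HT", repeat=2))
    print(ordered)    # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

    # Unordered outcomes: size-2 multisets drawn from {H, T}
    unordered = list(combinations_with_replacement("HT", 2))
    print(unordered)  # [('H', 'H'), ('H', 'T'), ('T', 'T')]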

For tossing a single six-sided die, the typical sample space is {1, 2, 3, 4, 5, 6} (in which the result of interest is the number of pips facing up).[7]

A subset of the sample space is an event, denoted by E. Referring to the experiment of tossing a coin, the possible events include E = {H} and E = {T}.[6]

A well-defined sample space is one of three basic elements in a probabilistic model (a probability space); the other two are a well-defined set of possible events (a sigma-algebra) and a probability assigned to each event (a probability measure function).

Another way to look at a sample space is visually. The sample space is typically represented by a rectangle, and the outcomes of the sample space are denoted by points within the rectangle. The events are represented by ovals, and the points enclosed within an oval make up the event.[8]

Conditions of a sample space

A set \( \Omega \) with outcomes \( s_{1},s_{2},\ldots ,s_{n} \) (i.e. \( \Omega =\{s_{1},s_{2},\ldots ,s_{n}\} \)) must meet some conditions in order to be a sample space:[9]

The outcomes must be mutually exclusive, i.e. if \( s_{j} \) takes place, then no other \( s_{i} \) will take place, for all \( i,j=1,2,\ldots ,n \) with \( i\neq j \).[4]
The outcomes must be collectively exhaustive, i.e., on every experiment (or random trial) some outcome \( s_{i}\in \Omega \) with \( i\in \{1,2,\ldots ,n\} \) will always take place.[4]
The sample space (\( \Omega \)) must have the right granularity depending on what we are interested in: we must remove irrelevant information from the sample space, i.e., choose the right abstraction.

For instance, in the trial of tossing a coin, we could have as a sample space \( \Omega _{1}=\{H,T\} \), where H stands for heads and T for tails. Another possible sample space could be \( \Omega _{2}=\{H\&R,\,H\&NR,\,T\&R,\,T\&NR\} \), where R stands for rain and NR for no rain. Obviously, \( \Omega _{1} \) is a better choice than \( \Omega _{2} \), as we do not care about how the weather affects the tossing of the coin.
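
The idea of choosing the right granularity can also be illustrated in code: forgetting the irrelevant weather component collapses \( \Omega _{2} \) back onto \( \Omega _{1} \). A minimal Python sketch, using ad hoc string labels for the outcomes:

    # Two sample spaces for the same coin-toss experiment
    omega_1 = {"H", "T"}
    omega_2 = {"H&R", "H&NR", "T&R", "T&NR"}

    # Forgetting the weather information maps each outcome of omega_2
    # onto its coin component, recovering omega_1.
    def coarsen(outcome):
        return outcome.split("&")[0]

    print({coarsen(s) for s in omega_2} == omega_1)  # True
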
Multiple sample spaces

For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks (Ace through King), while another could be the suits (clubs, diamonds, hearts, or spades).[3][10] A more complete description of outcomes, however, could specify both the denomination and the suit, and a sample space describing each individual card can be constructed as the Cartesian product of the two sample spaces noted above (this space would contain fifty-two equally likely outcomes). Still other sample spaces are possible, such as {right-side up, up-side down} if some cards have been flipped when shuffling.
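
The Cartesian-product construction can be carried out directly, for example with Python's itertools.product, which combines the rank and suit sample spaces into the 52-outcome space of individual cards:

    from itertools import product

    ranks = ["Ace", "2", "3", "4", "5", "6", "7", "8", "9", "10",
             "Jack", "Queen", "King"]
    suits = ["clubs", "diamonds", "hearts", "spades"]

    # Each individual card is a (rank, suit) pair.
    deck = list(product(ranks, suits))
    print(len(deck))  # 52
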
Equally likely outcomes
[Image: Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.]
[Image: A brass tack with point downward. Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.]
Main article: Equally likely outcomes

Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely.[11] For any sample space with N equally likely outcomes, each outcome is assigned the probability 1/N.[12] However, there are experiments that are not easily described by a sample space of equally likely outcomes—for example, if one were to toss a thumb tack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.[13]

Though most random phenomena do not have equally likely outcomes, it can be helpful to define a sample space in such a way that outcomes are at least approximately equally likely, since this condition significantly simplifies the computation of probabilities for events within the sample space. If each individual outcome occurs with the same probability, then the probability of any event becomes simply:[14]:346–347

\( P(\text{event}) = \frac{\text{number of outcomes in event}}{\text{number of outcomes in sample space}} \)

For example, if two dice are thrown to generate two uniformly distributed integers, D1 and D2, each in the range [1...6], the 36 ordered pairs (D1, D2) constitute a sample space of equally likely outcomes. In this case, the above formula applies, so that the probability of a certain sum, say D1 + D2 = 5, is easily shown to be 4/36, since 4 of the 36 outcomes produce 5 as a sum. On the other hand, the 11 possible sums {2, ..., 12} are not equally likely outcomes, so the formula would give an incorrect result (1/11).
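
The dice example can be checked by brute-force enumeration. The following Python sketch counts the outcomes in the event D1 + D2 = 5 and divides by the size of the sample space, exactly as in the formula above:

    from itertools import product
    from fractions import Fraction

    # The 36 equally likely ordered pairs (D1, D2)
    sample_space = list(product(range(1, 7), repeat=2))

    # Event: the two dice sum to 5
    event = [(d1, d2) for (d1, d2) in sample_space if d1 + d2 == 5]

    probability = Fraction(len(event), len(sample_space))
    print(event)        # [(1, 4), (2, 3), (3, 2), (4, 1)]
    print(probability)  # 1/9, i.e. 4/36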

Another example is having four pens in a bag. One pen is red, one is green, one is blue, and one is purple. Each pen has the same chance of being taken out of the bag. The sample space S = {red, green, blue, purple} consists of equally likely outcomes. Here, P(red) = P(blue) = P(green) = P(purple) = 1/4.[15]

Simple random sample
Main article: Simple random sample

In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample—that is, a sample in which every individual in the population is equally likely to be included.[14]:274–275 The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes).[16]
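
In practice, a simple random sample from a finite population can be drawn with a pseudorandom sampler such as Python's random.sample, which selects without replacement so that every subset of the requested size is equally likely. A sketch with a hypothetical population of 100 labelled individuals:

    import random

    population = list(range(1, 101))  # hypothetical population of 100 individuals

    # random.sample selects k distinct individuals, giving every individual
    # (and every size-10 subset) the same chance of being chosen.
    sample = random.sample(population, k=10)
    print(sample)
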
Infinitely large sample spaces

In an elementary approach to probability, any subset of the sample space is usually called an event.[6] However, this gives rise to problems when the sample space is continuous, so that a more precise definition of an event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events.

An example of an infinitely large sample space is measuring the lifetime of a light bulb. The corresponding sample space would be [0, ∞).[6]
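
For instance, under the purely illustrative assumption that bulb lifetimes follow an exponential distribution with a mean of 1000 hours, the probability of events such as {lifetime > t}, which are measurable subsets of [0, ∞), can be computed directly:

    import math

    MEAN_LIFETIME = 1000.0  # hours; an assumed value for illustration only

    def prob_longer_than(t):
        """P(lifetime > t) under the assumed exponential model."""
        return math.exp(-t / MEAN_LIFETIME)

    print(prob_longer_than(500))   # ~0.607
    print(prob_longer_than(2000))  # ~0.135
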
See also

Parameter space
Probability space
Space (mathematics)
Set (mathematics)
Event (probability theory)
σ-algebra

References

Stark, Henry; Woods, John W. (2002). Probability and Random Processes with Applications to Signal Processing (3rd ed.). Pearson. p. 7. ISBN 9788177583564.
Forbes, Catherine; Evans, Merran; Hastings, Nicholas; Peacock, Brian (2011). Statistical Distributions (4th ed.). Wiley. p. 3. ISBN 9780470390634.
Albert, Jim (1998-01-21). "Listing All Possible Outcomes (The Sample Space)". Bowling Green State University. Retrieved 2013-06-25.
"UOR_2.1". web.mit.edu. Retrieved 2019-11-21.
Dekking, F. M. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer. ISBN 1-85233-896-2. OCLC 783259968.
"Sample Space, Events and Probability" (PDF). Mathematics at Illinois.
Larsen, R. J.; Marx, M. L. (2001). An Introduction to Mathematical Statistics and Its Applications (3rd ed.). Upper Saddle River, NJ: Prentice Hall. p. 22. ISBN 9780139223037.
"Sample Spaces, Events, and Their Probabilities". saylordotorg.github.io. Retrieved 2019-11-21.
Tsitsiklis, John (Spring 2018). "Sample Spaces". Massachusetts Institute of Technology. Retrieved July 9, 2018.
Jones, James (1996). "Stats: Introduction to Probability - Sample Spaces". Richland Community College. Retrieved 2013-11-30.
Foerster, Paul A. (2006). Algebra and Trigonometry: Functions and Applications, Teacher's Edition (Classics ed.). Prentice Hall. p. 633. ISBN 0-13-165711-9.
"Equally Likely outcomes" (PDF). University of Notre Dame.
"Chapter 3: Probability" (PDF). Coconino Community College.
Yates, Daniel S.; Moore, David S.; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. Archived from the original on 2005-02-09.
"Probability I" (PDF). Queen Mary University of London. 2005.
"Simple Random Samples". web.ma.utexas.edu. Retrieved 2019-11-21.
