Talk:Stochastic convergence

Definition: A mathematical concept intended to formalize the idea that a sequence of essentially random or unpredictable events sometimes is expected to settle into a pattern.

Work in progress

This page is a work in progress; I'm struggling with the LaTeX and some other stuff, so there may be actual errors now.

Ragnar Schroder 12:04, 28 June 2007 (CDT)

Almost sure convergence

The definition in the text doesn't make sense. In general, something is true almost surely (a.s.) if it is true with a probability of 1. It is almost surely true that a randomly chosen number is not 4. Greg Woodhouse 12:08, 28 June 2007 (CDT)
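
In symbols (a minimal formalization, assuming the "randomly chosen number" X has a continuous distribution):

<math>P(X = 4) = 0, \qquad \text{hence} \qquad P(X \neq 4) = 1,</math>

so the event "X is not 4" holds almost surely even though it is not certain.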

Greg, some would argue that if the probability of being not true is ≤ 0.01‰, the convergence to a value is true, and the tail can be forgotten. Compare to series. Robert Tito |  Talk  12:44, 28 June 2007 (CDT)

I understand. It's a technical concept from measure theory. If two functions (say f and g) are equal except on a set of measure 0, we say f = g almost everywhere. This is important because their (Lebesgue) integrals over a given set will always be equal, and it is convenient to identify such functions, because then if we define

<math>d(f, g) = \int |f - g| \, d\mu,</math>

this defines a metric on <math>L^1</math>, giving it the structure of a metric space. If we didn't identify functions that were equal almost everywhere (i.e., treat them as the same function), the purported metric would not be positive definite.

On a foundational level, probability theory is essentially measure theory (and distributions are just measurable functions that, when integrated over a set A, give the probability of A). It is just a convention that probability theorists use the phrase "almost surely" instead of "almost everywhere"; the meanings are the same. Of course, I'm almost sure :) you already know this! Greg Woodhouse 13:28, 28 June 2007 (CDT)
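
To spell the identification point out (a sketch of the standard fact, using the same metric as above):

<math>d(f, g) = 0 \iff \int |f - g| \, d\mu = 0 \iff f = g \text{ almost everywhere},</math>

so d is positive definite only once almost-everywhere-equal functions are treated as a single element of the space.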


Almost sure convergence

I agree with all this, but I think stochastic convergence is a concept that should be accessible to intelligent laymen, not just math/natsci/tech guys.

Yes, but this is the talk page. :-) Greg Woodhouse 17:05, 28 June 2007 (CDT)

Therefore, I try hard to avoid reference to hard-core math like measure theory in the beginning of the article; such things should come at the end, after the non-expert has gotten as enlightened as possible wrt the basic ideas.

I've corrected the tex code in the definition. The definition is standard textbook fare, but I really don't like it, so I'll try to find one that's more intuitive.

Ragnar Schroder 16:30, 28 June 2007 (CDT)

Ragnar, please do. Science - especially as encyclopedic science - should be available and understandable to as many as possible, even to the level where analogies are used to visualize a point, even when the analogy isn't scientifically correct. If it helps to make laymen understand a topic, that is what I would like to call academic freedom in an educational and didactical sense. Please continue and make it easier. Robert Tito |  Talk  16:49, 28 June 2007 (CDT)
Yes, the definition is better, but what does it mean? I assume <math>\scriptstyle X_i</math> represents a stochastic process of some sort. Are you saying that as an ordinary function of i, the limit is a with probability 1? If it's just a function, why is probability involved?
That's what I mean when I say I don't like the compact expression - it's too confusing, especially when the limit itself is a stochastic variable. The formal definition used in my old textbook is <math>P(\lim_{i \to \infty} X_i = X) = 1</math>. Grad students may enjoy getting something like that thrown at them, but not a general audience. I hope the definition section is clear now.

Ragnar Schroder 14:04, 3 July 2007 (CDT)
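
One way to unpack the compact expression: writing ω for the outcome of the underlying random experiment, the statement asserts the probability of a set of outcomes,

<math>P\left(\{\omega : \lim_{i \to \infty} X_i(\omega) = X(\omega)\}\right) = 1,</math>

i.e., for almost every outcome the realized number sequence <math>X_i(\omega)</math> converges to <math>X(\omega)</math> in the ordinary calculus sense.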

That makes more sense. It isn't actually stated that a is a stochastic process. I'll be honest: When I first looked at this, I thought: "What does this mean? It looks like you're saying that a sequence of stochastic processes converges to a number?" Greg Woodhouse 14:38, 3 July 2007 (CDT)

Or are you saying that as <math>\scriptstyle i \rightarrow \infty</math>, the distance <math>|X_i - a|</math> approaches 0 with probability 1 (whatever that might mean)? The definition still needs to be fleshed out a bit.
On an intuitive level, I think it's clear enough what you mean: the variable <math>\scriptstyle X_i</math> represents a "random walk" that gets you closer and closer to a. You don't know where you will be after i steps, but you do know that the probability that you will be any sizeable distance from a becomes vanishingly small as i approaches infinity. How can you translate that into mathematical language? Greg Woodhouse 17:03, 28 June 2007 (CDT)
You focus on the convergence in probability implication of a.s. convergence here. I think a detailed discussion of the relation between the two should be mentioned after the introduction to the various modes of convergence, in a new subsection to the /* Relations between the different modes of convergence */ section.

Ragnar Schroder 14:04, 3 July 2007 (CDT)
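
Putting that intuition into symbols, the two standard definitions side by side (for comparison):

<math>X_i \to a \text{ almost surely:} \quad P\left(\lim_{i \to \infty} X_i = a\right) = 1</math>

<math>X_i \to a \text{ in probability:} \quad \lim_{i \to \infty} P\left(|X_i - a| > \epsilon\right) = 0 \text{ for every } \epsilon > 0</math>

The first implies the second, which is the implication referred to just above.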

Almost sure convergence to a random variable

I don't understand this fragment:

"The number a may also be the outcome of a stochastic variable X. In that case the compact notation is often used."

Isn't it the case that if a stochastic variable converges almost surely, the limit is a deterministic constant and not a stochastic variable? That is, <math>P(\lim_{i \to \infty} X_i = X) = 1</math> can only be true if X takes some value with probability one and is thus not really a stochastic variable. Or do you implicitly allow for the possibility that X depends on the <math>\{X_i\}</math>? -- Jitse Niesen 10:13, 5 July 2007 (CDT)

I'm glad I'm not the only one that finds this notation confusing. Unless I'm wrong, this is just another way of saying that a sequence of measurable functions converges to a measurable function X a.e. (pointwise), convergence […]


I had kind of forgotten about this, thanks for reminding me :-) .
An easy example is this: Let X be normally distributed and <math>X_n = \sqrt{2}</math> when <math>x \in Q</math>, <math>X_n = X + \frac{1}{n}</math> for <math>x \in R</math> outside Q. (The rationals carry probability zero under a continuous distribution, so <math>X_n \to X</math> almost surely, even though <math>X_n</math> does not converge to <math>X</math> on <math>Q</math>.)
Anyway, I hesitate to let this cat (about sets of measure 0) out of the bag and into the article.
I'm not really into measure theory, so I'm not completely sure I understood you correctly. Hope this can be understood as an answer anyway.  :-) .
Ragnar Schroder 12:19, 5 July 2007 (CDT)
[…] in probability is just convergence in measure, and convergence in the pth order mean is just L^p convergence. The strong law of large numbers (the hypotheses of which I can look up when I get home) ensures convergence a.e. (I think!). I also think(!) that it applies to i.i.d. random variables, and, in particular, to Bernoulli trials. But I'm a little out of my league at this level of probability theory. Greg Woodhouse 11:20, 5 July 2007 (CDT)
The compact notation <math>P(\lim_{i \to \infty} X_i = X) = 1</math> *is* confusing. Unfortunately, it seems to be traditional, so I guess it has to be included, even though a small rather than capital X would be more intuitive. Think of it this way: The value of X is obtained in advance, for instance by throwing a die. Then somehow the sequence converges to that value.
However, this is a simplification - the value of X may be unknown in advance, and the sequence <math>X_i</math> then basically may tell us more and more what the outcome of X would be, had we performed that particular random experiment.
I'll try to think of a simple, concrete example to illustrate this. Ragnar Schroder 11:14, 5 July 2007 (CDT)
Are you thinking about something like the following? The random variable X is some unknown physical quantity which we want to measure (we need some assumption on the distribution of X; let's say it's normal). We make individual measurements which we denote by <math>X_k</math>. Write <math>X_k = X + N_k</math> and assume that the measurement errors <math>N_k</math> are independent and identically distributed, say with some normal distribution. Finally, let <math>\textstyle S_k = \sum_{i=1}^k X_i / k</math> denote the mean of the first k measurements. Then <math>P(\lim_{k \to \infty} S_k = X) = 1</math> (the sample mean converges almost surely to the true value) by the strong law of large numbers.
However, if you think the notation <math>P(\lim_{i \to \infty} X_i = X) = 1</math> is confusing, and you don't need it in the article, then you might consider moving it to almost sure convergence.

-- Jitse Niesen 04:54, 7 July 2007 (CDT)
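
A minimal Octave sketch of this measurement picture (the true value, the noise level 0.5, and the sample count are invented for illustration):

X = randn();                    % the unknown "true" quantity, drawn once
k = 1:10000;
N = 0.5 * randn(1, 10000);      % i.i.d. normal measurement errors N_k
measurements = X + N;           % X_k = X + N_k
S = cumsum(measurements) ./ k;  % running sample means S_k
disp([X, S(end)])               % S_k should be close to X for large k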

examples?

Can anyone think of a non-trivial example of a.s. convergence (i.e., one where the sequence doesn't eventually become constant)? Greg Woodhouse 20:59, 28 June 2007 (CDT)

Many chemical processes and descriptions thereof use stochastics; I will have to see if I can come up with a decent example. But as an example (top of my head), it can be used to derive much of physical chemistry. Robert Tito |  Talk 

Well, I did a bit of looking, and the strong law of large numbers has as its conclusion a.s. convergence. (I didn't know that. I was aware of the weak law of large numbers, but never bothered to look up the strong version.) Anyway, I added this as a third example. Greg Woodhouse 21:52, 28 June 2007 (CDT)

An example of non-trivial a.s. convergence:
Assume a guy has two sources of income, one stochastic. On a given day j he gets $<math>X_j</math> from the stochastic one, and $<math>f(j)</math> from the other. Assume the first one converges almost surely to 0. Then obviously his total income <math>U_j = X_j + f(j)</math> converges almost surely to <math>f(j)</math>.

Ragnar Schroder 23:20, 28 June 2007 (CDT)
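
A quick Octave sketch of this income example (taking <math>X_j</math> = randn/j, which converges almost surely to 0, and a flat $100 a day for f(j); both choices are invented for illustration):

j = 1:5000;
f = 100 * ones(1, 5000);   % deterministic daily income, f(j) = $100
X = randn(1, 5000) ./ j;   % stochastic income, converges a.s. to 0
U = X + f;                 % total income U_j = X_j + f(j)
plot(j, U);                % U_j settles down to 100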

I've deleted example 3. Greg Woodhouse 23:38, 28 June 2007 (CDT)

Categories

I'm uncertain about the categories for this article; it's now listed under Mathematics, Physics and Chemistry. AFAIK, stochastic differential equations are used a lot in economics, maybe more than in, for instance, chemistry???

Other areas include computer science, operations research, and mathematical biology. If we tried to include every area where these ideas can be applied, the list could become a bit unwieldy! On the other hand, that doesn't mean we shouldn't include certain core application areas; there really are no hard and fast rules. Greg Woodhouse 20:52, 28 June 2007 (CDT)

Relationships between the types of convergence

I commented out the last claim in this section because I believe it is false as stated. Perhaps you had something different in mind here. Greg Woodhouse 21:34, 28 June 2007 (CDT)

Oops...I've got mud on my face here! Looking a bit more closely, I see that you're saying that L^p convergence implies convergence in measure. This is true. Greg Woodhouse 21:44, 28 June 2007 (CDT)

Added example 3 is under wrong convergence

Example 3 of the Almost sure convergence section should be moved to the Convergence in probability section.

Rather surprisingly, the sequence does NOT converge a.s., only P, AFAIK.

Ragnar Schroder 22:58, 28 June 2007 (CDT)

Ok. It DOES converge a.s. as well as P. A.s. convergence follows from the "Strong law of large numbers", P convergence from Khinchin's theorem, aka the "weak law of large numbers".
I think the example was very good, and should be put back either where it was, or under convergence in probability.

Ragnar Schroder 22:09, 30 June 2007 (CDT)
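
For reference, the two laws side by side, for i.i.d. <math>X_1, X_2, \ldots</math> with mean <math>\mu</math> and sample mean <math>S_n</math>:

<math>\text{strong law: } P\left(\lim_{n \to \infty} S_n = \mu\right) = 1</math>

<math>\text{weak law: } \lim_{n \to \infty} P\left(|S_n - \mu| > \epsilon\right) = 0 \text{ for every } \epsilon > 0</math>

The strong law gives almost sure convergence, the weak law convergence in probability.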

stochastic variable

Unfortunately, even a simple definition of

  • stochastic variation
  • stochastic variable

is omitted here. It would clarify much. Robert Tito |  Talk 

Why not post a definition here (on the talk page), then we can "talk" about what to put in the article? I've been confused by the term, too, because it sounds almost like a conflation of two (more standard) terms: random variable and stochastic process. If we're talking about limits, the latter is probably […]

Stochastic variable = random variable; some math textbooks use them as synonyms. I just replaced the term "stochastic variable" by "random variable", for the reasons you gave.
I don't like the term "random variable" very much in this context, though, because it may confuse a casual non-educated reader - he may think a "random variable" is a "variable" picked at "random".
I think this *overview* article should be as noob-friendly as humanly possible; less accessible ideas should be reserved for the 4 in-depth articles.
Ragnar Schroder 14:31, 6 July 2007 (CDT)


[…] intended, but entropy is associated with a random variable (the kinetic energy of a molecule, say) and not a process (like a random walk, or arrival of data frames on a network link). In any case, using standard terminology would help me out quite a bit! Greg Woodhouse 23:27, 5 July 2007 (CDT)

That didn't come out right at all: I didn't mean to imply I was an expert on stochastic processes (I'm not), only that I think definitions would make it easier for us to understand one another, and that will make it easier for us to […]

Cool. Thanks for the feedback, I appreciate it, I'm here to learn as well as teach.

Ragnar Schroder 14:31, 6 July 2007 (CDT)


[…] discuss the article itself. In case you're wondering: my interest here stems from wanting to better understand performance in distributed systems, and an interest in various aspects of computers as physical systems and the descriptive capacity of languages (both natural and artificial). I know that sounds like a collection of topics not having much to do with each other, but they fit together rather nicely -- really. Greg Woodhouse 23:42, 5 July 2007 (CDT)

stochastic

one of the basic things that is missing is the variable time. Any stochastic process consists of a slowly varying part (e.g. f(A)) and a fast repeating disturbance of ever decreasing amplitude B(t,µ). In this case B is the stochastic variation and µ the stochastic variable. µ can in itself be a function of time and another variable, e.g. the path length of a random walk, or its angle. It might lead to misunderstanding to call this variable µ a random variable, as it is a stochastic variable that has (per its definition) the possibility to have random (yet ever decreasing) values. The randomness is dependent upon the way the stochastic processes are being described by science. Robert Tito |  Talk  14:59, 6 July 2007 (CDT)

That's the difference between a random variable and a stochastic process. In a random walk, where you end up after n steps (or just the direction you move at a given time) is a random variable. A stochastic process associates a random variable with each time, typically in such a way that the transition probabilities follow reasonable distributions. Greg Woodhouse 20:10, 6 July 2007 (CDT)
You got it. The point is that the total, process = A(x,y,z) + B(x',y',z',t), is called the stochastic process; only part B is the rapidly varying part/trail. Robert Tito |  Talk  20:39, 6 July 2007 (CDT)
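
To illustrate the distinction in Octave: each entry S(n) below is a single random variable (the position after n steps), while the whole path is one realization of the stochastic process (a toy random walk, invented for illustration):

steps = 2 * (rand(1, 1000) > 0.5) - 1;  % i.i.d. +1/-1 coin-flip steps
S = cumsum(steps);                      % S(n) = position after n steps
plot(1:1000, S);                        % one sample path of the process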


Reminder to myself

The octave code was

n=[0:1:199];
x=normal_rnd(0,1,1,200); y=(40*x).^2./(n.+1);
plot(n,y,"3");

Do the calculation P{|y_n| < epsilon} before putting any formula in, or find a better function.

Ragnar Schroder 15:48, 19 November 2007 (CST)
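
A sketch of the calculation the note asks for, with <math>X \sim N(0,1)</math> and <math>y_n = (40X)^2/(n+1)</math> as in the code:

<math>P(|y_n| < \epsilon) = P\left(X^2 < \frac{\epsilon (n+1)}{1600}\right) = 2\,\Phi\!\left(\frac{\sqrt{\epsilon (n+1)}}{40}\right) - 1 \;\to\; 1 \quad \text{as } n \to \infty,</math>

where Φ is the standard normal distribution function; so <math>y_n \to 0</math> in probability.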

Disambiguating convergence

"Convergence" has several meanings in computer science, some quite precise, some recognized as approximations of a dynamic process, and some marketing-speak that is still widely used. I have started a page, convergence (disambiguation).

In some areas, it is understood that "convergence" will be qualified with an adjective such as "stochastic" or "routing". Unfortunately, in other areas, such as the "convergence" of data, voice, and video services onto the Internet, there is no customary qualifier.

Please look at this disambiguation page and put in whatever additional text, if any, seems appropriate. The disambiguation page is unfortunately fragmentary. Howard C. Berkowitz 09:05, 14 July 2008 (CDT)

The text on the disambiguation page seems good to me.
Ragnar Schroder 20:16, 27 September 2008 (CDT)