Actually, it's a proto-function.
The idea is that the function will read a list of items and produce probabilities from a given chart, which may have sub-probabilities.
'Tis ugly, but that's because it's incomplete...
Anyone wanting to make something similar could use it as a base, but until I have a chance to finish it, it is what it is.
The purpose of using 1d100 is that it gives a flat percentage chance for any given event. Depending on the list, each event will have a specific percentage window. Those windows can be spread out as desired if the random number generator shows any bias.
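The window idea above can be sketched roughly like this. This is a minimal illustration, not the actual function from the post: the table contents, window sizes, and names (`DROP_TABLE`, `roll_drop`) are all hypothetical, and the `rng` parameter is only there to make the sketch testable.

```python
import random

# Hypothetical drop chart: each entry is (upper bound of its 1-100 window, event).
# Window sizes here are illustrative only.
DROP_TABLE = [
    (5,   "rare item"),      # rolls 01-05  ->  5% window
    (25,  "uncommon item"),  # rolls 06-25  -> 20% window
    (100, "common item"),    # rolls 26-100 -> 75% window
]

def roll_drop(rng=random.randint):
    """Roll 1d100 and return the event whose window contains the roll."""
    roll = rng(1, 100)
    for upper, event in DROP_TABLE:
        if roll <= upper:
            return event
```

Because every roll of 1d100 is equally likely, the size of each window is exactly that event's percentage chance.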
(I misread your remark: you are correct that I'm declaring a value without necessity. That's how I write functions... I produce effects one line at a time, and once I've tested everything and know that it's working, I condense it, if applicable. It's something of a habit, because in some older languages you needed to do that: every variable was declared first, and only then could it be assigned. You couldn't both declare a variable and give it a value on the same line. After a lifetime of that kind of habit, you tend to follow the procedure even when it isn't mandatory.
I'm not sure the compiler ultimately cares, though, and a 5% increase in local variables really shouldn't matter in this regard. Aye, doing it in one instruction is cleaner, but in some cases it can also make source code harder to read.
I also didn't comment that function at all. I generally do commenting last.
I do typically reduce things once I'm certain they work as intended. In this case, I'm not done with the function: I need to adjust the basic percentages, and I need to create sub-tables. I posted it here as an example of how to make items generate statistics on pick-up... It clearly doesn't do that yet, but it's a method that could be converted to produce what I believe to be the desired outcome.
That's also why I just copy-pasted the item creation code blocks. I was containing everything within that function to test it; particularly the frequency of drops.
It's just my normal procedure, which is a bit more formulaic than what people probably do today. When I feel that this is working as desired, then, and only then, will I reduce it to its minimum constituents, move the item generation into its own function, and begin working on imported tables of item sets. Those will, by their very nature, be arrays, as that's the most convenient way to store the information.)
I also declare things in order, by habit. I find it easier to construct something when I know the exact flow of an operation, and I generally condense it at some future point.
Shifting values may be ideal, but it does not work for fixed numeric probabilities. It's not just a 5% chance that event X occurs; it's a specific window, between 01 and 05, out of 100. If you needed a list with millions of excluded values, rerolling would be a problem, but a single exclude, or a small list, with the calculation capacity of a modern system, will produce a result in under one second. Further, the smaller the list and the greater the value pool, the less likely it is to produce an excluded value.
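The reroll-on-exclusion approach described above can be sketched as follows. The function name `roll_excluding` and its parameters are my own for illustration; the point is simply that with a small excluded set and a large value pool, the loop terminates almost immediately.

```python
import random

def roll_excluding(sides, excluded, rng=random.randint):
    """Roll 1dN, rerolling until the result is not in the excluded set.

    With a small excluded set and a large value pool, the chance of
    needing even one reroll is small, and each further reroll is
    exponentially less likely to be needed.
    """
    excluded = set(excluded)
    if len(excluded) >= sides:
        raise ValueError("every value is excluded; the loop would never end")
    while True:
        value = rng(1, sides)
        if value not in excluded:
            return value
```

The guard clause handles the one pathological case (everything excluded); otherwise the fixed percentage windows of the remaining values are preserved relative to one another.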
It's theoretically possible to always produce an excluded value, but the probability of that falls off exponentially. The actual probability of repeatedly producing the same result is
c = (n) * ( (p) ^ (A) )
...where c is the number of permutations, n is the number of possible values on the first roll, p is the number of possible values per recurring roll, and A is the number of attempts beyond the first to produce any result. For example:
1d10
c = (10) · (10^0), producing a number from 1-10 on the first attempt (A = 0)
c = 10; 1:c = 1:10.
7d10
c = (10)*(10^6)
c = 10,000,000; 1:c = 1:10,000,000
3d6
c = (6)*(6^2)
c = 216
1:c = 1:216
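The worked examples above can be checked mechanically. Note that this counts ordered roll sequences (the formula as given in the text), and the helper name `permutations` is mine, not from the original function:

```python
def permutations(n, p, A):
    """Number of equally likely outcome sequences, per c = n * p**A:
    n values on the first roll, p values on each of the A additional rolls."""
    return n * p ** A

# Worked examples from the text:
print(permutations(10, 10, 0))  # 1d10: one roll, no repeats
print(permutations(10, 10, 6))  # 7d10: first roll plus six more
print(permutations(6, 6, 2))    # 3d6: first roll plus two more
```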
The chance of producing any given number of values (v) out of the possible number of values is:
(v:c).
If you increase the number of excluded values (v), the probability remains v:c. This matters when you desire, or require, a specific number rather than a raw probability.
If you roll 3d6 and are excluding, say, three specific outcomes out of the 216 possible roll sequences, then the probability of producing any of them is 3:216;
i.e. three values (v), and 216 chances (c).
You can then convert that into a statistical percentage:
(100 / c) * v
For the above example, (100/216) * 3 = 1.3888(recurring), or 1.39%.
We'll convert this to the decimal value 0.0139 and assign it the variable L.
The number of recurring events that produce an identical result is the value R.
Now, with this statistical projection, you can predict the statistical probability of needing to repeat an operation:
B = (L^R).
Thus, if on the first attempt you produce one of these values (a 1.39% chance), then on the second attempt your likelihood of again producing any of the excluded values falls to roughly 0.000193, or 0.0193%.
A third attempt multiplies this value (B) by L again, now roughly 0.00000269, or 0.000269%.
Continued attempts produce exponentially diminishing probabilities of the operation producing specific results.
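The B = L^R relationship above amounts to a one-liner. The helper name `repeat_chance` is mine; the numbers reproduce the 3:216 example (exact fractions, so they differ slightly from the rounded 1.39% figures in the text):

```python
def repeat_chance(v, c, attempts):
    """Probability of hitting one of v excluded values out of c outcomes
    on every one of `attempts` consecutive tries: (v/c) ** attempts."""
    return (v / c) ** attempts

L = repeat_chance(3, 216, 1)      # one attempt: about 1.39%
two = repeat_chance(3, 216, 2)    # two in a row: about 0.0193%
three = repeat_chance(3, 216, 3)  # three in a row: about 0.000268%
```

Each additional attempt multiplies the running probability by L, which is why the chance of an endless reroll loop is negligible in practice.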
Note: I'm using basic algebraic expressions here, rather than proper constants/variables. I clearly don't mean mesons, or amperes, with my uses of A and B. Ordinarily, I'd be more concise, but I feel horrible today.
The correct formula would be more like this, assuming I'm not too far gone to write it using proper probability expressions. (Anyone who wishes to contradict, or correct these, may feel free to do so...):
P(A) = ( x1 · (x2 ^ xn) )
Where P(A) is the number of permutations, x1 is the number of possible values on the first roll, x2 is the number of possible values per recurring roll, and xn is the number of attempts beyond the first.
|A| : P(A)
Where |A| is the number of values in the list of excluded values; this ratio is the odds of hitting one of them on a single attempt.
E(A) = ( ( 100 ÷ P(A) ) · |A| )
E(B) = ( E(A) ÷ 100 )
P(repeat) = E(B) ^ xr
Where xr is the number of consecutive attempts that each produce an excluded value. Multiplying P(repeat) by 100 converts it back to a percentage.
I think my first example, above, is easier for most people to comprehend.
Edited by ZoriaRPG, 10 June 2014 - 07:01 AM.