is "rand() % 100;" not very random?

null_reflections

Legendary Coder
I'm just wondering if what the folks in this Stack Exchange thread say is totally true, or maybe outdated, and if it is true, what they actually mean:

The thread is over 13 years old, but when I test that particular way of establishing a range of numbers, the results do appear totally random. However, this is a pretty small sample; I'd suppose you'd need to get thousands of numbers this way before you knew for sure.

First block is my code, second block is the output:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void) {

    int i;

    for (i = 0; i < 30; i++) {
        sleep(1);
        srand(time(NULL));
        int n = rand() % 100;
        printf("%d\n", n);
    }

}

Code:
32
18
7
15
72
34
88
7
65
34
80
82
61
67
36
5
78
31
82
37
16
77
79
9
25
90
21
80
36
27

Numbers in the 30's happened to appear several times in this, but that could easily just be chance.
 
Solution
Based on your output, around 33% of the numbers fall between 20 and 40. Since those 20 values are 20% of the hundred possible values, you would expect roughly that percentage of the output to land in that range.

But there is no reason to study this more, since the output comes from only 30 rounds. To get a reasonable amount of random numbers, you should generate thousands of them.

Now, I don't have a C compiler on this computer, so I'll use JS instead.
JavaScript:
/*  With this code, each number between 1 and 100
    should be randomized around 100 times. */
  
let count = {};

for (let i = 0; i < 10000; i++) {
  let num = Math.floor(Math.random() * 100) + 1;
  if (!count[num]) {
    count[num] = 1;
  } else {
    count[num]++;
  }
}

console.log(count);

With that, each number usually came up somewhere between 85 and 120 times. We can say that falls within the expected averages.
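(For context, if each of the 100 values really is equally likely, the count for any single value over 10000 draws is roughly binomial with mean 10000 / 100 = 100 and standard deviation sqrt(10000 * 0.01 * 0.99) ≈ 10, so counts anywhere from the high 80s up to around 120 are only a couple of standard deviations from the mean.)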

The next program flips a coin 10 times and then tells how many times each side came up.

JavaScript:
/*  this program throws a coin 10 times and
    counts the number of times it lands on heads and tails
    and prints the result. */
  
let count = {
  heads: 0,
  tails: 0
};

for (let i = 0; i < 10; i++) {
  let flip = Math.random() < 0.5 ? 'heads' : 'tails';
  count[flip]++;
}

console.log(count);

I ran this one a few times, and only once did both sides land 5 times each. Usually it was 7 to 3, 8 to 2, or 6 to 4. Once it was 9 against 1. So with small data sets, variations like this are not just possible but probable.
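(For reference, the chance of an exact 5-5 split in 10 fair flips is C(10,5) / 2^10 = 252 / 1024, only about 25%, so getting a lopsided result like 7 to 3 most of the time is exactly what you'd expect.)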
 
Solution
This method is probably not the best way to establish a range of numbers. If you are looking to get a specific range, I suggest using a more efficient algorithm, such as using modulo arithmetic.
 
If you are looking to get a specific range, I suggest using a more efficient algorithm, such as using modulo arithmetic.
Or do you mean just more random? "More efficient" implies getting more out of fewer resources, and that would only seem to matter if you don't have enough resources to accomplish something... whereas I'd figure that with some really small programs like those above, that's not really what we are going for, since any sort of modern personal computer could run them very easily.
 
I don't think the subject was what the proper method of generating random numbers is, but rather a conversation about how random the results are.
Correct me if I'm wrong.
 
Okay, so I'm accepting @EkBass 's post as an answer, because there's a slight tendency toward selecting the lower range of numbers when generating random numbers with "%". I changed my code above to loop more times, and to have rand choose 0 through 9:

Code:
#out of 1000 generations
sed -n '/[0-4]/p' rand-test2 | wc -l
547

Code:
#out of 100 generations
sed -n '/[0-4]/p' rand-test | wc -l   
55

This validates the people in the Stack Exchange post too, but what I'm still wondering is why % is geared towards lower numbers to begin with, and what the simplest way is to make the result more random. Does making the arithmetic that defines the range more complicated just fix the problem, as the answers on that page imply? I'm just a beginner with C, so I need this broken down into the simplest terms. Maybe there is a better header to use; when making recommendations, please stick with GNU/Linux, because I like it better than Windows overall.
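From what I can make of it, the small bias (if there is one) would come from RAND_MAX + 1 not being an exact multiple of 100, so the lowest remainders each get mapped to one extra rand() value. Here is a small sketch of one common workaround I've seen, rejection sampling, just as an illustration (rand_below is a name I made up, not something from that page):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Draw a value in 0..n-1 without the slight modulo bias:
   throw away the leftover top slice of rand()'s range first. */
int rand_below(int n)
{
    unsigned long range = (unsigned long)RAND_MAX + 1UL;  /* how many values rand() can return */
    unsigned long limit = range - (range % (unsigned long)n);
    unsigned long r;

    do {
        r = (unsigned long)rand();
    } while (r >= limit);   /* reject the few values that would make low remainders more likely */

    return (int)(r % (unsigned long)n);
}

int main(void)
{
    srand(time(NULL));
    for (int i = 0; i < 10; i++)
        printf("%d\n", rand_below(100));
    return 0;
}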
 
Okay, so I'm accepting @EkBass 's post as an answer, because there's a slight tendency toward selecting the lower range of numbers when generating random numbers with "%". I changed my code above to loop more times, and to have rand choose 0 through 9:

Actually, one way to figure out whether a pattern of numbers was produced by a person or by a computer is to look at the pattern itself. A computer trying to make random numbers usually produces some recognizable patterns, such as numbers between 20 and 40 being more common than the rest. Another way is to follow the generated numbers one by one: if you create 100 random numbers, it is possible that the same number comes up twice in a row, or that 17, 18 and 19 appear right after each other.

When you study numbers produced by a human, you most likely can't find these patterns, at least not in such obvious ways, because the human thinks "OK, the last one was 19, so let's take something completely different" and ends up taking 80 as the next number.
I don't remember the name of the documentary, but there is a teacher who teaches computer programming. At the end of the term, he asks the students to write a random number generator that throws a die 100 times, and the students should bring him the results but not the program.

This teacher can tell with pretty good accuracy whose output was produced by the program they created and who didn't bother to write the program but wrote the output themselves.
 
There are several ways to round a number in C, or in any language. When using integers, it's always the whole part in front of the decimals, so the fraction is simply dropped.

1.23 is 1.
1.99999 is still 1.

You need to look at the C function round() to make it round towards the nearest whole integer. Here is a pretty good tutorial for it: C round() Function - Scaler Topics
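A minimal sketch of the difference (just an illustration, not taken from that tutorial):

Code:
#include <stdio.h>
#include <math.h>    /* round() lives here; on GNU/Linux link with -lm */

int main(void)
{
    double x = 1.99999;

    int truncated = (int)x;      /* converting to int just drops the decimals: 1 */
    double rounded = round(x);   /* rounds to the nearest whole number: 2 */

    printf("truncated: %d\n", truncated);
    printf("rounded:   %.0f\n", rounded);
    return 0;
}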
I can work with that, thanks.

Actually, one way to figure out whether a pattern of numbers was produced by a person or by a computer is to look at the pattern itself. A computer trying to make random numbers usually produces some recognizable patterns, such as numbers between 20 and 40 being more common than the rest. Another way is to follow the generated numbers one by one: if you create 100 random numbers, it is possible that the same number comes up twice in a row, or that 17, 18 and 19 appear right after each other.

When you study numbers produced by a human, you most likely can't find these patterns, at least not in such obvious ways, because the human thinks "OK, the last one was 19, so let's take something completely different" and ends up taking 80 as the next number.
I don't remember the name of the documentary, but there is a teacher who teaches computer programming. At the end of the term, he asks the students to write a random number generator that throws a die 100 times, and the students should bring him the results but not the program.

This teacher can tell with pretty good accuracy whose output was produced by the program they created and who didn't bother to write the program but wrote the output themselves.
What?!
 
A way to figure out the difference between a man-made and computer-generated pattern of numbers is to study the random numbers generated. Computers usually produce more uniform patterns, such as having more numbers between 20 and 40. Man-made numbers may be more varied and often have different numbers in succession.

This phenomenon can also be seen in programming: a teacher may ask their students to write a random number generator program that throws a die 100 times, and the teacher can accurately tell whose output was generated by the program and whose was written by hand.
 
I'll continue: of course, if the human knows that his or her output will be validated this way, he or she can adjust the output accordingly. But when doing this with people who do not know that their output will be validated, then with pretty good accuracy we can determine whose output was produced by a computer program and who typed the numbers themselves.
 
A way to figure out the difference between a man-made and computer-generated pattern of numbers is to study the random numbers generated.
Yeah, but to me the important thing to understand in this context is just that computers tend to do math more accurately and reliably than humans across the board. There are people who can do arithmetic flawlessly and quickly, but a computer could theoretically be like that person times a thousand or a million. I'm definitely not that good at math, but good enough to make estimates, etc.
 
Just wondering - is the discussion merely of theoretical interest, or are you really worried your random numbers are not "random enough"? If the latter, how much randomness do you want or need?

If I needed convincing of the randomness, I would generate a million numbers and use them as coordinates for points plotted in a 32000x32000 square. If it looks like uniform gray noise, it's OK. If there is any pattern whatsoever, it is not. I haven't tried this though... but then again I'm not worried 🙂
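Something like this rough sketch could approximate that test without actually drawing an image, by counting how many points land in each cell of a coarse grid (the grid size and point count are just made up for illustration, and I haven't polished this):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define GRID 10            /* coarse 10x10 grid instead of a 32000x32000 image */
#define POINTS 1000000L

int main(void)
{
    long cells[GRID][GRID] = {0};

    srand(time(NULL));
    for (long i = 0; i < POINTS; i++) {
        int x = rand() % GRID;   /* column the point falls in */
        int y = rand() % GRID;   /* row the point falls in */
        cells[y][x]++;
    }

    /* With a uniform generator every cell should hold about POINTS / (GRID * GRID) points. */
    for (int y = 0; y < GRID; y++) {
        for (int x = 0; x < GRID; x++)
            printf("%6ld ", cells[y][x]);
        printf("\n");
    }
    return 0;
}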
 
Just wondering - is the discussion merely of theoretical interest, or are you really worried your random numbers are not "random enough"?
Neither one -- I'm just posting for the sake of learning C, because the Stack Exchange topic above kinda goes all over the place and doesn't really discuss how those different formulas actually work from a syntax perspective. I understand that one thing you need to do is offset the fact that rand() tends to favor the lower half of the numbers by slightly biasing your program/function towards the upper half.

No, there's no mission- or career-critical stuff going on here. Also, for the sake of theory, there isn't such a thing as a "truly random number"; you basically just create random numbers by making your pool of choices varied and enormous.
 
rand() does not inherently favor any range of numbers, as it is an algorithm that produces a pseudorandom sequence of numbers.

In general, when generating multiple random numbers, patterns can often emerge, as these numbers are pseudorandom and not truly random. To avoid this, it is important to create a large and varied pool of possible outcomes to draw from. However, since it's likely that some kind of pattern will exist when producing a large amount of random numbers, why should we avoid these patterns?
 
However, since it's likely that some kind of pattern will exist when producing a large amount of random numbers, why should we avoid these patterns?
Nobody here is avoiding these patterns: but what if you wanted your program to cough up the digits 0 through 9 in a "truly random" fashion? To me, that is interesting. What is a truly random number? Does it even exist? The fact that nobody seems to want to address the actual function this is all based on head-on makes it even MORE interesting. You and other programmers said that rand() chooses low numbers, and my data so far confirms that... even the runs I didn't post. Why?

I think I need to reiterate something I said before if this thread is to stay open:

I am just a beginner with C

So I get that C by itself isn't very useful, and you just plug all these pieces together like function Legos, but the point is that it's still made of some kind of syntax, text, buttons, or circuits switching on and off.

Sorry if I come off as being rude, but I thought part of what programmers do is also decode things. You don't have to respond; you can even downvote my post (unless it doesn't let you have negative numbers). I am literally a stupid and inexperienced programmer and computer user, so just play it like that.
 
When it comes to computers, there are no truly random numbers. The result is tied to the exact moment when the number is produced, or to a seed passed to the function that produces these numbers.

On the other hand, throwing a die does not produce a random number either, but a number that lands on top due to physical laws.

You and other programmers said that rand() chooses low numbers, and my data so far confirms that... even the runs I didn't post. Why?

You must have misunderstood. We did not say that; we said it may or may not produce them. It may also produce more high numbers than low numbers, or numbers in the middle. But some kind of uniform pattern is most likely created at the same time.

An output of 30 random numbers is not valid data by any means for doing more research on how the rand function works. It requires thousands of rounds, or even more, of creating a random number to really see the output and be able to validate the data.
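For instance, a minimal sketch of that kind of larger test might look like this (my own illustration, tallying 100000 draws of a digit 0-9, not the exact code used earlier in the thread):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define DRAWS 100000L

int main(void)
{
    long counts[10] = {0};

    srand(time(NULL));                 /* seed once, outside the loop */
    for (long i = 0; i < DRAWS; i++)
        counts[rand() % 10]++;         /* tally each digit 0-9 */

    /* With 100000 draws, each digit should land somewhere near 10000 times. */
    for (int d = 0; d < 10; d++)
        printf("%d: %ld\n", d, counts[d]);
    return 0;
}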
 