One thing to remember is that arrays are essentially pointers under the hood: an array decays to a pointer to its first element. Even if you don't use them directly, or always use smart pointers like std::shared_ptr<T>, they're still there.
For example, accessing the following array:
```cpp
#include <cassert>

int main() {
    int foo[3] = {1, 2, 3};
    // foo's type is int[3]; it decays to int* in most expressions,
    // but unlike a plain int*, the array type still carries its size
    assert(foo[1] == 2);     // ordinary indexing
    assert(*(foo + 1) == 2); // the same element via pointer arithmetic
}
```
Realistically, in modern C++ you can likely avoid raw pointers entirely; references cover most of the pointer use cases. I'd say the main goal is to prevent data from being copied around needlessly, since copying costs both time and memory bandwidth.
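As a minimal sketch of that last point (the function and type names here are my own, just for illustration), passing a large object by const reference avoids the copy that pass-by-value would make:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Pass-by-value: the whole vector (and every string in it) is copied per call.
std::size_t total_length_by_value(std::vector<std::string> words) {
    std::size_t n = 0;
    for (const auto& w : words) n += w.size();
    return n;
}

// Pass-by-const-reference: no copy, and the callee still can't mutate the argument.
std::size_t total_length_by_ref(const std::vector<std::string>& words) {
    std::size_t n = 0;
    for (const auto& w : words) n += w.size();
    return n;
}
```

Same observable behavior, but the second version does no copying at the call site, which is exactly the "don't copy needlessly" goal.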
99.5% would still be e^200 numbers checked (7×10^86). According to the Quora link in my other comment, we've only calculated primes in sequence up to 4×10^18 as of 7 years ago. 95% is very doable though.
Edited to correct first N primes vs primes up to N.
We got nerd sniped at almost the exact same time, but approached this in very different ways. I applaud your practical approach, but based on what I calculated, you should stop now. It will never reach 99.999%.
A few calculations:
- There are 9592 prime numbers less than 100,000. Assuming the test suite only tests numbers 1-99999, the accuracy should actually be only 90.408%, not 95.121%.
- The 1 trillionth prime number is 29,996,224,275,833. This means that even checking every number up to ~30 trillion (enough to cover the first trillion primes) would only get you to 96.667% accuracy.
- The density of primes near x can be approximated using the Prime Number Theorem as 1/ln(x). Solving 99.9995 = 100 - 100/ln(x) for x gives e^200000, or about 7.88 × 10^86858. In other words, the universe will end before any current computer could check that many numbers.
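A quick sanity check of those three bullets (my own sketch, not part of the original comment; it assumes, as the bullets imply, a guesser that always answers "not prime", and hard-codes π(100,000) = 9592 and the 1-trillionth prime from above):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Bullet 1: always answering "not prime" for 1..99999;
    // there are 9592 primes below 100,000.
    std::printf("1..99999: %.2f%%\n", 100.0 * (99999.0 - 9592.0) / 99999.0);

    // Bullet 2: same strategy for every number up to the 1-trillionth prime.
    std::printf("up to 29,996,224,275,833: %.2f%%\n",
                100.0 * (1.0 - 1e12 / 29996224275833.0));

    // Bullet 3: invert the PNT estimate, accuracy ~ 100 - 100/ln(x).
    const double targets[] = {95.0, 99.5, 99.9995};
    for (double a : targets) {
        double ln_x = 100.0 / (100.0 - a);      // required ln(x)
        double log10_x = ln_x / std::log(10.0); // e^ln_x overflows a double
        std::printf("%.4f%% needs x ~ e^%.0f ~ 10^%.1f\n", a, ln_x, log10_x);
    }
}
```

This prints roughly 90.41%, 96.67%, and the e^20 / e^200 / e^200000 thresholds, matching the figures above.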
I got to this same result on my 3rd attempt, and had to throw in the towel. Everything's randomized every time.
Here's my attempt in Rust (106 characters):

```rust
fn a(n:usize){for i in 1..=n{println!("{:2$}{}", "", "* ".repeat(i), n-i);}println!("{:1$}| |", "", n-2);}
```
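To show what it does (my own trace, not from the original comment): `{:2$}` left-pads the empty string to width n-i before each row of stars, and the final println! indents the trunk by n-2. So a(5) should print (each star row also ends with a trailing space from "* ".repeat):

```
    *
   * *
  * * *
 * * * *
* * * * *
   | |
```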
To be fair, I used to work there, and not even Microsoft understands their docs.