What concepts or facts do you know from math that is mind blowing, awesome, or simply fascinating?
Here are some I would like to share:
- Gödel’s incompleteness theorems: any consistent system of axioms rich enough to describe arithmetic contains true statements that can never be proved within that system. In other words, some mathematical questions can never be settled no matter how much time you put into them.
- Halting problem: it is impossible to write a program that can decide, for every input program, whether it loops forever or eventually finishes running. (Undecidability)
The Busy Beaver function
Now this is the mind-blowing one. What is the largest finite number you know? Graham’s Number? TREE(3)? TREE(TREE(3))? This one beats them all easily.
- The Busy Beaver function Σ(n) grows faster than any computable function, so its values eventually dwarf any number you can name with a formula. The function itself is uncomputable: no program can calculate Σ(n) for every n, no matter how powerful the computer. (A small brute-force sketch for the 2-state case follows the list below.)
- In fact, merely knowing certain values of Σ would settle some of the hardest open problems in mathematics.
- Σ(1) = 1
- Σ(4) = 13
- Σ(6) > 10^10^10^…^10, a power tower of fifteen 10s stacked on top of each other
- Σ(17) > Graham’s Number
- Σ(27): there is a 27-state Turing machine that halts if and only if the Goldbach conjecture is false, so knowing Σ(27) would settle the Goldbach conjecture.
- Σ(744): there is a 744-state Turing machine that halts if and only if the Riemann hypothesis is false, so knowing Σ(744) would settle the Riemann hypothesis.
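To make the definition concrete, here is a small Python sketch (my own illustration, not from the sources below) that brute-forces the 2-state case: it enumerates every 2-state, 2-symbol Turing machine, runs each one with a step cap, and reports the most 1s any halting machine leaves on the tape, recovering Σ(2) = 4. The catch is the step cap: for larger n you can never be sure a still-running machine won’t eventually halt, which is where the halting problem bites.

```python
from itertools import product

# Brute-force the 2-state, 2-symbol Busy Beaver: try every machine, run it
# with a step cap, and track the most 1s left on the tape by a halting machine.
STATES = [0, 1]      # two working states, "A" and "B"
HALT = -1            # a transition may also jump to a halting state
STEP_LIMIT = 100     # generous: the 2-state champion halts after only 6 steps

def run(machine):
    """Run a machine on a blank tape; return the number of 1s it leaves,
    or None if it has not halted within STEP_LIMIT steps."""
    tape, pos, state = {}, 0, 0
    for _ in range(STEP_LIMIT):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
        if state == HALT:
            return sum(tape.values())
    return None

# Each (state, read symbol) pair maps to (symbol to write, move left/right, next state).
choices = list(product([0, 1], [-1, +1], STATES + [HALT]))
keys = [(s, b) for s in STATES for b in (0, 1)]

best = 0
for combo in product(choices, repeat=len(keys)):
    ones = run(dict(zip(keys, combo)))
    if ones is not None:
        best = max(best, ones)

print(best)  # prints 4, i.e. Σ(2) = 4
```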
Sources:
- YouTube - The Busy Beaver function by Mutual Information
- YouTube - Gödel’s incompleteness Theorem by Veritasium
- YouTube - Halting Problem by Computerphile
- YouTube - Graham’s Number by Numberphile
- YouTube - TREE(3) by Numberphile
- Wikipedia - Gödel’s incompleteness theorems
- Wikipedia - Halting Problem
- Wikipedia - Busy Beaver
- Wikipedia - Riemann hypothesis
- Wikipedia - Goldbach’s conjecture
- Wikipedia - Millennium Prize Problems - $1,000,000 Reward for a solution
x^n + y^n = z^n has no solutions where n > 2 and x, y, and z are all natural numbers. It’s hard to believe, given that it has infinitely many solutions when n = 2.
Pierre de Fermat, after whom this theorem was named, famously claimed to have had a proof, leaving the following remark in the margin of a book he owned: “I have a proof of this theorem, but there is not enough space in this margin.” It took mathematicians more than 350 years to actually find a proof.
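Just to make the n = 2 contrast concrete, here is a small Python sketch (mine, purely for illustration): Euclid’s classic formula (m^2 - n^2, 2mn, m^2 + n^2) churns out as many solutions of a^2 + b^2 = c^2 as you like.

```python
# Euclid's formula: for any integers m > n >= 1,
# (m^2 - n^2, 2*m*n, m^2 + n^2) solves a^2 + b^2 = c^2.
def pythagorean_triples(m_limit: int):
    for m in range(2, m_limit + 1):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            yield a, b, c

for a, b, c in pythagorean_triples(4):
    print(f"{a}^2 + {b}^2 = {c}^2 ->", a * a + b * b == c * c)
# 3^2 + 4^2 = 5^2, 8^2 + 6^2 = 10^2, 5^2 + 12^2 = 13^2, ... all True
```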
For me, personally, it’s the divisible-by-three check. You know, the little shortcut you can do where you add up the individual digits of a number and if the resulting sum is divisible by three, then so is the original number.
That, to me, is black magic fuckery. Much like everything else in this thread, I have no idea how it works, but unlike everything else in this thread, it’s actually a handy trick that I use semi-frequently.
That one’s actually really easy to prove numerically.
Not going to type out a full proof here, but here’s an example.
Let’s look at a two digit number for simplicity. You can write any two digit number as 10*a+b, where a and b are the first and second digits respectively.
E.g. 72 is 10 * 7 + 2. And 10 is just 9+1, so in this case it becomes 72=(9 * 7)+7+2
We know 9 * 7 is divisible by 3 as it’s just 3 * 3 * 7. Then if the leftover digits we add on (7 and 2) also sum to a multiple of 3, we know the entire number is a multiple of 3.
You can then extend that to larger numbers as 100 is 99+1 and 99 is divisible by 3, and so on.
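If you’d rather check it by brute force than by algebra, a quick Python sketch (my own, just for illustration) confirms that the digit-sum trick agrees with a plain remainder check:

```python
def divisible_by_3_digit_trick(n: int) -> bool:
    """The shortcut: add up the digits and check whether that sum is divisible by 3."""
    return sum(int(d) for d in str(abs(n))) % 3 == 0

# The shortcut and the direct remainder check agree on every number we try.
assert all(divisible_by_3_digit_trick(n) == (n % 3 == 0) for n in range(100_000))

print(divisible_by_3_digit_trick(72))  # True:  7 + 2 = 9
print(divisible_by_3_digit_trick(74))  # False: 7 + 4 = 11
```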
I find the logistic map to be fascinating. The logistic map is a simple mathematical equation that surprisingly appears everywhere in nature and social systems, and it is a great example of how complex behavior can emerge from a straightforward rule. Imagine a population of creatures with limited resources that reproduce and compete for those resources. The logistic map describes how the population size changes from one generation to the next as a function of its current size, and it reveals fascinating patterns. When the population is small, it grows rapidly thanks to ample resources; as it approaches the carrying capacity, growth slows and competition intensifies. Depending on the growth rate, the population can then settle to a stable value, oscillate between several values, or bounce around chaotically with no repeating pattern. This concept echoes in various real-world scenarios, from the spread of epidemics to traffic jams and even models of economic behavior. Its chaotic regime has even been used to build pseudo-random number generators, since a computer can’t actually generate truly random numbers on its own. Veritasium did a good video on it: https://www.youtube.com/watch?v=ovJcsL7vyrk
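Here is a minimal Python sketch of the map x_next = r * x * (1 - x) (my own illustration, not from the video) showing how the growth rate r changes the long-run behavior:

```python
def logistic_orbit(r: float, x0: float = 0.2, skip: int = 500, keep: int = 8):
    """Iterate x -> r*x*(1-x), discard the transient, return the last few values."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))  # settles to a single stable population (~0.6429)
print(logistic_orbit(3.2))  # oscillates between two values
print(logistic_orbit(3.9))  # chaotic: no repeating pattern
```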
I find it fascinating how it permeates nature in so many places. The constant that falls out of it (the Feigenbaum constant) is universal, but not one we can easily observe directly.
I came here to find some cool, mind-blowing facts about math and have instead confirmed that I’m not smart enough to have my mind blown. I am familiar with some of the words used by others in this thread, but not enough of them to understand, lol.
Nonsense! I can blow both your minds without a single proof or mathematical symbol, observe!
There are different sizes of infinity.
Think of integers, or whole numbers; 1, 2, 3, 4, 5 and so on. How many are there? Infinite, you can always add one to your previous number.
Now take odd numbers; 1, 3, 5, 7, and so on. How many are there? Again, infinite because you just add 2 to the previous odd number and get a new odd number.
Both of these are infinite, but the set of odd numbers is by definition smaller than the set of all integers, because it doesn’t have the even numbers.
But they are both still infinite.
Your fact is correct, but the mind-blowing thing about infinite sets is that they go against intuition.
Even if one might think that there are strictly fewer odd numbers than natural numbers, the two sets are in fact the same size. With the mapping n ↦ 2n - 1 you send each natural number to a different odd number, and every odd number gets hit (such a function is called a bijection), so by definition the sets are the same size.
To get really different “infinities”, compare the natural numbers to the real numbers. There is no map from the naturals that hits every real number (that’s Cantor’s diagonal argument), so there really are “more of them”.
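A tiny Python sketch of the n ↦ 2n - 1 pairing from above (my own, just to make it concrete): every natural number gets its own odd number, and the inverse map shows that no odd number is left out.

```python
def to_odd(n: int) -> int:
    """Pair the natural number n with the odd number 2n - 1."""
    return 2 * n - 1

def from_odd(m: int) -> int:
    """The inverse pairing, so every odd number is somebody's partner."""
    return (m + 1) // 2

for n in range(1, 6):
    print(n, "<->", to_odd(n))  # 1<->1, 2<->3, 3<->5, 4<->7, 5<->9

# Round-tripping in both directions shows the pairing is one-to-one and onto
# (checked here on a finite prefix; the formulas work the same way for all n).
assert all(from_odd(to_odd(n)) == n for n in range(1, 10_000))
assert all(to_odd(from_odd(m)) == m for m in range(1, 10_000, 2))
```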