Taking a final exam is not exactly a cause for celebration for most students, but being on the other side, as an exam grader, is not much more fun. It can easily take several days to grade the answers to the same question for hundreds of students. Graders will probably tell you that after reading just a few lines of an answer they know whether it is going to be (mostly) correct or not, yet they still need to read all of it just to be sure. Those few lines can be misleading – a good start can still lead to a complete failure, and a bad start might eventually, somehow, get most of the details right.
So maybe when grading exams we cannot really do it, but are there problems where reading only a few lines of a proof is enough to determine its correctness? As it turns out, certain problems have an innate structure that makes them more “stable” than others – if you write a “wrong” solution for them, the error shows up almost everywhere, so reading just a few lines will be enough.
One of the best examples of this kind of problem is linearity testing. Suppose that someone claims that a function $f$ is linear, and to prove it they show that $f(x+y) = f(x) + f(y)$ for all $x, y$. What Blum, Luby and Rubinfeld (BLR) showed is that, in a sense, it is enough to know that one of these conditions (chosen randomly) is true to conclude that all of them are true with high probability. In this post we will try to see what is so important about linearity, and which structures hide in this problem, to make this miracle happen.
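To make the test concrete, here is a minimal sketch in Python (the example functions `parity` and `not_linear` are my own illustrations, not from the original analysis). Bit strings in $\{0,1\}^n$ are encoded as integers, so the group addition over $\mathbb{Z}_2^n$ is bitwise XOR:

```python
import random

def blr_test(f, n, trials=100):
    """Randomized BLR linearity test: sample random pairs x, y in {0,1}^n
    (encoded as n-bit integers) and check f(x) + f(y) = f(x + y),
    where + on inputs is bitwise XOR. Returns True if all samples pass."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# A genuinely linear function over Z_2^n: the parity of a fixed bit mask.
def parity(x):
    return bin(x & 0b1011).count("1") % 2

# A function that is far from linear, for contrast.
def not_linear(x):
    return 1 if x % 3 == 0 else 0
```

A linear function like `parity` passes every sampled pair, while `not_linear` violates the condition on a large fraction of pairs, so a handful of random trials almost surely catches it – that is exactly the "read a few lines" phenomenon.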
In the previous post we considered a simple domino game where we needed to cover a board with dominoes. We wanted to find a way such that if a board couldn’t be covered, then we could prove it easily, and this led us to the definition of the game invariants. These are “elements” that do not change as the game progresses, so if the initial and final states have different invariants, then we cannot reach one from the other.
In this post, we will deal with a new game – the Peg Solitaire – and see if and how we can find such invariants for this game.
Sudoku is a simple game that was invented more than a century ago, but it only became famous about 20 years ago. Suddenly it started appearing in the puzzle sections of many newspapers, you could buy booklets full of Sudokus, and there are Sudoku websites, apps and much more.
If you have ever played Sudoku, you know that it can sometimes be very hard to solve, and it can become even harder with other variants of the game, like larger Sudoku boards with larger blocks. While it can be hard and time consuming to solve, in general the Sudoku puzzles that you see in the puzzle sections are not impossible to solve. Moreover, once you solve a Sudoku puzzle, it is very easy to prove to other people that you know the solution – just show them the solution and they can check its validity quickly. However, what happens if you are given a Sudoku puzzle that can’t be solved? How can you even check that it can’t be solved, and if you know that it is unsolvable, can you easily prove it to other people?
The goal of this series of posts (next one is here) is to consider games like Sudoku (though a bit simpler), and try to learn how to find and use tools to show that in some cases they are unsolvable.
We have now seen, in the previous two posts, a few results about prime numbers. In the first one, we saw why primes are so important and used Euclid’s proof to show that there are infinitely many primes. Moreover, we conjectured that if $a$ and $q$ are coprime, then there are infinitely many primes $p$ with $p \equiv a \pmod{q}$. And to top it all, we also conjectured that these primes are evenly distributed among the residue classes which are coprime to $q$.
Writing conjectures down is much easier than actually proving them, and in order to do this, in the second post we reproved the fact that there are infinitely many primes, this time using Euler’s proof. The main idea was to consider the infinite sum $\sum_{p \text{ prime}} \frac{1}{p}$ and to show that it diverges to infinity, and therefore there are infinitely many primes.
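To get a feel for just how slowly this sum diverges, here is a quick numerical sketch (an illustration of mine, not part of the original argument): sieve the primes up to $n$ and add up their reciprocals.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes p <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def prime_reciprocal_sum(n):
    """Partial sum of 1/p over the primes p <= n."""
    return sum(1.0 / p for p in primes_up_to(n))
```

For example, `prime_reciprocal_sum(10)` is $\frac{1}{2}+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}\approx 1.176$, and even pushing $n$ to a million only gets you to roughly $2.9$: the partial sums grow like $\log\log n$, so the divergence is real but glacial.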
The set of prime numbers has very little structure that we know of, and actually what we are trying to do is to find some of this structure. For that, we moved step by step to worlds with more structure, in which we have many more tools that can be used to tackle this problem – from the set of prime numbers to the integers (which have addition and multiplication), and from there to the real numbers (which also have a metric, derivatives, integrals, etc.). Then, if we are lucky enough, we can prove this claim in the world of real numbers and pull it back to the integers, and then back to the prime numbers. In this post, we will find the last step, which will allow us to go to even smaller sets of primes, according to their equivalence classes modulo $q$, and prove Dirichlet’s theorem:
THEOREM: For any coprime integers $a$ and $q$, there are infinitely many primes $p$ with $p \equiv a \pmod{q}$.
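Before proving anything, the theorem is easy to check empirically. A small sketch (the modulus $q=4$ below is just an illustrative choice of mine): count the primes up to $n$ in each residue class mod $q$, and watch the coprime classes split the primes roughly evenly.

```python
from collections import Counter

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes p <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def count_by_residue(q, n):
    """Number of primes p <= n in each residue class p % q."""
    return Counter(p % q for p in primes_up_to(n))
```

For $q = 4$ and $n = 10^5$, the two coprime classes $1$ and $3$ each hold close to half of the 9592 primes below $10^5$, while class $2$ contains only the single prime $2$ – exactly the picture Dirichlet’s theorem (and the even-distribution conjecture) predicts.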
In the previous post we saw why primes are so important, and used Euclid’s proof to show that there are infinitely many primes. We further conjectured that not only are there infinitely many primes, they are also “nicely” distributed among the integers.
More precisely, if we fix some integer $q$, then for every $a$ we can ask how many primes $p$ there are such that $p \equiv a \pmod{q}$. If $a$ and $q$ are not coprime, then we showed that there is either zero or one such prime, so the infinitely many primes that we already know exist are equivalent to $a$ mod $q$ where $a$ is coprime to $q$. The conjecture was that not only is the union over these classes infinite, but for each such $a$ the number of primes is infinite (which is Dirichlet’s theorem on primes in arithmetic progressions), and moreover, in a sense we have “the same” infinity in each of these classes. While we could generalize Euclid’s proof a bit to some other cases, it becomes really complicated quite fast.
Trying to look for other methods, we mix our prime numbers with a bit of calculus, and turn to Euler for some help.
The prime numbers are one of those basic, yet mysterious, sets in mathematics: while we know much about them, there are still many interesting open questions waiting to be answered, including probably the most well known conjecture in mathematics – the Riemann hypothesis.
While this conjecture, and the attempts at its solution, use very advanced mathematics, even the most basic questions about the prime numbers already have many interesting results and lead to interesting tools. In this post and the next two (here and here), we will try to understand some of these just by trying to “count” primes.
The idea for this post came from a question I saw in a math help forum about improper integrals. While this problem has a very simple solution using basic tools in integral calculus, I want to show a more geometric approach which can be generalized much further (and for the Hebrew speakers among you, I made a video explaining this as well here).
Imagine going to a new amusement park for the first time. Once you get there, you go to the first ride that you see. Once you finish it, you randomly choose one of the roads leaving it and follow it until you reach another ride. You continue like this for a long time – every time you go on a ride, you then choose a random road (you might even go back on the same road that you came from) and walk along it until you reach your next ride.
After doing this for a long enough time, which ride will you have gone on the most? Which the least? And how are these questions connected to eigenvectors and eigenvalues?
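The park can be modeled as a graph – rides are nodes, roads are edges – and the tour is a random walk on it. Here is a minimal sketch (the toy park below is my own illustrative example): simulate the walk and count visits. The long-run visit frequencies form the stationary distribution, which is precisely the eigenvector of the walk’s transition matrix with eigenvalue 1; for an undirected graph it turns out to be proportional to the nodes’ degrees.

```python
import random
from collections import Counter

# A toy park: rides are nodes, roads are undirected edges (illustrative).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, []).append(b)
    neighbors.setdefault(b, []).append(a)

def simulate_walk(start, steps):
    """Random walk: from each ride, follow a uniformly random road.
    Returns a Counter of how many times each ride was visited."""
    visits = Counter()
    node = start
    for _ in range(steps):
        visits[node] += 1
        node = random.choice(neighbors[node])
    return visits
```

In this park, ride 2 has three roads while ride 3 has only one, so the walk spends about $3/8$ of its time at ride 2 and only $1/8$ at ride 3 – the degree-proportional eigenvector, showing up in a simulation.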
The radar is a detection system that was developed before and during World War II for military uses, though today it has many other applications including, for example, astronomical and geological research. The name radar is an acronym for RAdio Detection And Ranging, and as its name suggests, it uses radio waves to detect targets. In this post we will try to understand one of the basic ideas behind the operation of the radar, which, while it seems quite simple at first glance, reveals an interesting problem once we start to look at the details. In order to solve it we will need to use a mathematical theorem first formulated by the Chinese mathematician Sunzi in the 3rd century AD.
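That theorem is the Chinese Remainder Theorem, and a small Python sketch shows the computation it promises (the radar connection, roughly: an echo’s delay is only known modulo each pulse-repetition interval, and combining several pairwise-coprime intervals recovers the true delay). This is the standard constructive CRT, assuming pairwise-coprime moduli and Python 3.8+ for the modular inverse `pow(M, -1, m)`:

```python
def crt(remainders, moduli):
    """Constructive Chinese Remainder Theorem for pairwise-coprime moduli:
    returns the unique x with 0 <= x < prod(moduli) such that
    x == r_i (mod m_i) for every pair (r_i, m_i)."""
    x, M = 0, 1
    for r, m in zip(remainders, moduli):
        # Adjust x by a multiple of M so that it becomes r mod m;
        # pow(M, -1, m) is the inverse of M modulo m.
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x
```

Sunzi’s original puzzle asks for a number leaving remainders 2, 3, 2 modulo 3, 5, 7, and `crt([2, 3, 2], [3, 5, 7])` recovers his answer, 23.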
Mathematical research is something that most people don’t really understand. They can imagine someone in a lab mixing chemicals or doing experiments with some scientific machinery, but mathematical research?
The goal of this post is to share with you a small part of this process – how we start with a simple problem, what happens when we complicate this problem, and how eventually we get what mathematicians call a research problem. For those interested, I made a video about this subject which you can see here and a Hebrew version here.
The square billiard table
This problem originates in the well known game of billiards. While a general billiards game has many balls, we will simplify it to a square billiard table and a single ball. We then hit the ball and assume that it moves at a constant speed (so it doesn’t lose any momentum), and when it hits the edges of the table, it bounces back the way we expect it to: the angle of reflection equals the angle of incidence.
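This motion is easy to simulate with the standard “unfolding” trick (a sketch under my own normalization of a unit-square table): instead of tracking each bounce, let the ball move in a straight line in the plane, and then fold its coordinates back into $[0,1]$ by reflecting at the integers.

```python
def fold(t):
    """Fold a real number into [0, 1] by reflecting at the integers,
    mimicking a ball bouncing between two parallel walls."""
    t = t % 2.0
    return t if t <= 1.0 else 2.0 - t

def ball_position(x0, y0, vx, vy, t):
    """Position at time t of a ball on the unit-square table:
    move in a straight line in the plane, then fold each coordinate."""
    return fold(x0 + vx * t), fold(y0 + vy * t)
```

For instance, a ball starting at the center and moving horizontally with speed 1 hits the right wall, bounces to the left wall, and is back at the center at time 2 – which is exactly what folding the straight-line position 2.5 gives.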