What is the Birch and Swinnerton-Dyer Conjecture – Part 1

Yesterday, I watched a talk by Manjul Bhargava called What is the Birch and Swinnerton-Dyer Conjecture?. I think he did a really good job of explaining the conjecture at a level that can be understood by a non-expert yet mathematically inclined person.

In the caption, there is a link to these lecture notes (not written by Bhargava himself). I thought it might be interesting to go through these lecture notes and explore the territory of topics mentioned in the talk (especially in its introductory part). Therefore, I will walk through the lecture notes in this blog post (and one or two follow-up posts) and try to look into the mentioned theorems in a bit more detail.



Walking the stack in C++

Recently, I asked myself how I would print the call stack of my program. After reading a bit about this, mainly in this document, I decided that I wanted to try it out. Here is my minimal solution (which is not recommended for use in practice).

The basic idea is that once you have a valid frame pointer you can get the base address of the previous stack frame and the current return address: on x86-64, the saved frame pointer sits at the frame base and the return address one word above it. To get the initial frame pointer, I use gcc’s built-in function for that. In theory, it should also be possible to derive the initial frame pointer from a pointer to the first stack variable of the function, but this method seems unreliable because the compiler might deviate from that convention.


#include <dlfcn.h>   // dladdr, Dl_info
#include <cstdint>
#include <iostream>

void testStack()
{
    Dl_info info;
    uint64_t* esb = reinterpret_cast<uint64_t*>(__builtin_frame_address(0));
    uint64_t* ra = reinterpret_cast<uint64_t*>(__builtin_return_address(0));
    std::cout << "Return address according to gcc: " << ra << std::endl;
    std::cout << "--------------------------------------------------------------------" << std::endl;

    while (esb != nullptr && *esb != 0)
    {
        uint64_t* saved_esb = esb;
        // The return address is stored one word above the saved frame pointer.
        uint64_t my_ra = *(esb + 1);

        std::cout << "Value of current stack base pointer: " << esb << std::endl;
        std::cout << "Return address according to me: " << std::hex << my_ra << std::endl;
        if (dladdr(reinterpret_cast<void*>(my_ra), &info))
        {
            std::cout << info.dli_fname;
            if (info.dli_sname)
                std::cout << ", " << info.dli_sname;
            std::cout << std::endl;
        }
        std::cout << "Value of previous stack base pointer: " << std::hex << *saved_esb << std::endl;
        std::cout << "--------------------------------------------------------------------" << std::endl;

        // Follow the saved frame pointer to the caller's frame.
        esb = reinterpret_cast<uint64_t*>(*saved_esb);
    }
}

Here are, for reference, the documentation pages for the functions that I’ve used:

How to write a 21st century paper

tl;dr: In this post I point to two ideas: How to write proofs, by Leslie Lamport,
and how to write research papers, by Simon Peyton Jones.

Last week, we had the Heidelberg Laureate Forum here in … well, Heidelberg. I did not participate but I like to read about the event and in particular its invited laureates, as for example on their blog.

One talk that I liked particularly was the one by Leslie Lamport. (Btw.: Some people like to call him “The Turing award winner and father of LaTeX”, but given his publications list that’s both too unspecific and too narrow.) Anyway, I liked his talk about “How to write a 21st century proof”. There is also a paper about the same idea.

I liked the idea of clearly structured proofs. He demonstrates it with a proof of a corollary of the mean value theorem from calculus: he takes a conventionally written prose proof from a textbook and adds structure and names to it, pretty much as one would do when teaching a computer (a proof assistant) the proof. Eventually (in the paper) he even goes all the way and writes the proof in his own specification language (TLA+), which can be used as a proof checking language. So he goes from a conventional (mostly prose) proof to a structured, more readable proof, and on to a not-readable-for-humans-but-for-computers proof. The middle step is the one I liked very much.

The particular proof that Leslie Lamport has picked for his talk and paper has a linear structure. Therefore, it is easy to enumerate the steps of the proof as a list. However, some proofs don’t have this linear structure but still lend themselves to depiction as a directed acyclic graph (see for example this rewrite of Szemerédi’s theorem by Terence Tao, page 8). The general idea of dividing a proof into smaller steps, specifying all the assumptions and conclusions of each step, and linking them into a directed acyclic graph is a very natural way to do this.

The difficult questions are, of course, how small the steps should be and how much detail should be added to each step. But Lamport also answers this question: make the proof hierarchical and use modern tools (in this case hypertext). This way, one can expand or collapse each part of the proof and hide details from the eyes of sophisticated readers. If one cannot use hypertext, the only solution is to make an assumption and imagine a particular reader. This reader can be a little curious kid (as in the paper’s proof from the freshman calculus book) or a Fields Medalist reading your latest research article.
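To make this concrete, here is my own toy sketch (not Lamport’s exact rendering) of what such a hierarchically structured proof could look like, for the standard corollary of the mean value theorem that a function with vanishing derivative is constant:

```
COROLLARY. If f'(x) = 0 for all x in (a, b), then f is constant on (a, b).

<1>1. SUFFICES: For all u, v in (a, b) with u < v, f(u) = f(v).
      PROOF: A function taking equal values at any two points of (a, b)
             is constant on (a, b).
<1>2. f is continuous on [u, v] and differentiable on (u, v).
      PROOF: Differentiability on (a, b) (the hypothesis) implies both.
<1>3. PICK c in (u, v) such that f(v) - f(u) = f'(c) (v - u).
      PROOF: Mean Value Theorem, using <1>2.
<1>4. f'(c) = 0.
      PROOF: By hypothesis, since c lies in (a, b).
<1>5. QED
      PROOF: By <1>3 and <1>4, f(v) - f(u) = 0.
```

Each step names exactly what it assumes and what it concludes, and a reader who trusts a step (say, the appeal to the Mean Value Theorem) can skip its sub-proof entirely.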

For me personally, extending this approach of less prose and more structure to scientific articles sounds interesting. But of course, this has already been done by Lamport’s fellow Microsoft employee (and Haskell inventor (again the question: is this fair to him or the other Haskell inventors?)) Simon Peyton Jones in his article and talk How to write a great research paper.

One thing that I don’t like about papers can be captured by the following quote from Jones’ talk:

“Computer programs often have bugs. It is very important to eliminate these bugs [1,2]. Many researchers have tried [3,4,5,6]. It really is very important.”


[Better] Example:
“Consider this program, which has an interesting bug. <brief description>. We will show an automatic technique for identifying and removing such bugs”


I also agree with Jones that I don’t like it when authors put the related work section at the beginning and thereby build a hill or wall over which the reader has to climb to get to the actual results. I think a good approach to this problem is the same one that Lamport proposed in his talk: add structure and hide the details in the lower levels of the hierarchy.

If I find the time, I will try to exemplify this with a few papers from my field.

The destructive starting phase

Every beginning is difficult. I always feel the implications of this sentence when I want to understand something completely new. This something can be a mathematical concept or a piece of code (software or hardware) written by someone else or my former self.

At the beginning you have to de-construct the work of the other person. What they have built with their creativity, you have to persistently un-build with your intelligence to understand their creative process if you later want to be creative with what they’ve done.

That’s a fundamental problem with human creativity (in certain disciplines, at least): you have to go through the process that others have already gone through in order to augment their achievements. Unfortunately, you cannot simply continue from the knowledge of the person who came before. There is no magical transfer of state of mind. All you can do is absorb what they’ve done and get into another, similar state of mind that allows you to reproduce and then surpass their results.

And sometimes that just feels hard. But it feels even harder if you allow yourself or other people to put time pressure on the project. This process is rarely fast or predictable, and you never know how long it will take to understand the creation of other people.