Another reason to use Ada: it provides features that let the compiler catch some typical kinds of human error.

This is especially important for mathematical programming, since in this field we are sometimes unable to check an algorithm by testing it and instead need to prove its correctness mathematically (just as we check the correctness of a pen-and-paper calculation manually).

This and some other features make Ada superior for mathematical programming.

Perceptive remarks, but regarding differential forms, I would say that these really are one of the things where one benefits greatly from having them explained by a living lecturer. The difficulty often lies in the context, in matters such as “how one should think about them”, “why one should do X”, and “how things fit together”, rather than in details of proof formalia. The difficulty is in learning to apply the right perspective.

Categories can be like that too. As can C compilers.

I hope it doesn’t stick around very long.

— Alan Perlis (ref: http://norvig.com/21-days.html)

While I more or less agree, as with all aphorisms, counterexamples are legion. In this case, the usual suspects are “environmental factors”:

– Lack of library support and the like (the so-called “Turing tar-pit”): While Mathematica can solve systems of polynomial equations “out of the box,” Gröbner basis algorithms could also be implemented in, say, Brainf**k. These alternatives are only equivalent in theory. Conversely, it’d probably be easier to implement the Risch algorithm in ARM assembler than it would be to convince Wolfram to port Mathematica to some random embedded platform.

– Communication overhead: No code is an island. It might be easier to learn C# than to code up a translator in GRADSTUDENT.

– Politics: rightly or wrongly, there exist “powers that be” that, on occasion, demand a particular implementation language. Even in the “wrongly” case, the shortest distance between two points is often through compliance.

And so on. But your comment that you “just want to learn some language, after which it should be easy to adapt to another one” is perceptive and correct; the risk you might pick up some “bad habits” along the way is, in my experience (IT and CS, mostly), overestimated. The reason why you can’t teach an old dog new tricks is not that he is incapable, but that he doesn’t care to learn. When it comes down to it, “bad habits” are simply “the devil you know,” and a sort of skeptical conservatism is not only understandable, but commendable in such a marketing-driven field as information technology. A more elegant proof does not invalidate a messy proof; that a “better way” exists does not make all others senseless folly.

The most important thing, I’d say, is that you’ll be “getting your hands dirty,” one way or another. By way of analogy, consider this: recently, I was in a so-called “advanced calculus” course, moving a bit more slowly than anticipated by the professor and the syllabus; one of the students was clearly frustrated because he “really wanted to learn differential forms.” The syllabus and textbook ended with Stokes’ Theorem; everything was in R^n, so we’re really just talking about translating what he “already knows” into “another language,” and maybe kicking it up a dimension or n because we can. Assuming he met the prerequisites for the course, viz., linear algebra and multivariable calculus, I never quite figured out why he was waiting for what he was waiting for. If he didn’t know the FTC or Fubini’s Theorem, he wouldn’t have been there; the algebra is easy (d^2 = 0, f*dx = d(x o f)?), and partitions of unity are just a matter of working through a few simple proofs. “A little thought,” Thoreau once wrote, “is sexton to all the world.” In this case, the necessary thought is trivial: whatever integrals he needs to get the hang of things; whatever proofs he needs to understand why they work out. But he didn’t know this, as he never bothered to try. “Not everyone thinks as you do,” I’m told, and that I overestimate people. But if he wants to learn forms, and he’s capable of undergraduate calculus and algebra, what, exactly, am I overestimating? I’m not expecting he’ll _invent_ the bloody things, or work out their applications to cohomology or mathematical physics while he’s folding laundry. And I’m not that bright — as I’m constitutionally incapable of memorization, I doubt I’d have been able to pass undergraduate calculus if I _hadn’t_ known forms.

To abuse the analogy a bit, an equivalent “bad habit” might be using the Hodge dual to work out the curl of a vector field in spherical coordinates in order to apply the classical Stokes’ Theorem in R^3 in terms of forms. Pointless, perhaps, though educational in more ways than one. And, from a strictly practical perspective, whatever works, no? But this is hardly a reason to avoid learning calculus in the first place. After all, where would Cartan have been had Newton refused his education on account of a potential “better way?”

Fredrik, actually it’s recommended to ask for both all warnings and warnings 😉

(No, there’s no missing word in that sentence.)

Take these command lines:

1) gcc -W ….

2) gcc -Wall …

3) gcc -W -Wall …

One would think that #1 turned on warnings and that would be that. As it turns out, there are many, many warnings in the gcc compiler, and lots of macho programmers didn’t like being told that their code was bad, so -W only turns on some of them (roughly, those with the lowest false positive rate).

#2 then turns on all the warnings, right? No. Some of them pissed some people off too much so it only turns most of them on.

#3 turns all of them on (as far as I know).

For anal-retentive reasons, the gcc developers are slowly deprecating the -W spelling in favour of -Wextra, but the rest of the world pretty much ignores that.

Summary: always use ‘-W -Wall’ with gcc unless you have a really good reason not to. And Python is a really good language (if a bit slow for certain things). Highly recommended. And there’s work under way to make it faster (Unladen Swallow, PyPy) so it’ll probably be faster in a couple of years.

Java is like a simplified version of C++.

And Python is what most people these days say is the easiest programming language.

Only, don’t use the tutorial sigfpe links to; use the immortal LYAH: http://learnyouahaskell.com/

http://hackage.haskell.org/platform/mac.html

And if you have any questions, let us help: http://www.reddit.com/r/haskell

For a mathematician who isn’t really interested in the ugliness of #include etc., I’d recommend J. It’s an interpreted language and even more terse than mathematical notation. I never cease to be amazed at the breathtakingly short solutions to Project Euler problems that are done in J. Of the first 30 or so problems, many involving stuff like palindromic primes, most had J solutions no more than 2 lines long. One reason mathematicians like J is that it handles matrices like vectors like scalars, with no extra effort such as hand-crafted loops.

Yes, the J language complete with development environment is free and available for the Mac.

Prof. Gowers, whether you decide to continue your journey with C or (better) Python, I would highly suggest that you check out Project Euler (http://projecteuler.net).

Most of the initial problems can be solved with a rudimentary knowledge of any programming language. I personally love how some of the problems cannot be solved by a naive, brute-force algorithm but require a certain degree of mathematical insight. As a mathematician, you will no doubt love this site and its exercises.

Another interesting programming language for you to learn at a later stage would be Haskell. You’ll probably love the language for the same reason why mathematically challenged programmers hate it.

For example, consider the following line of Haskell code:

[n | n <- [1..], n `mod` 3 /= 0]

You can probably guess what this does, considering that it's so close to mathematical notation. The line above represents an infinite list of the natural numbers that are not multiples of 3. Thanks to lazy evaluation, we can play with data structures that represent countably infinite sets.

For example, in Haskell I could write:

take 5 [x | x <- [1..], x `mod` 3 /= 0]

And I would obtain the first 5 natural numbers that are not multiples of 3:

[1,2,4,5,7]

This is a simple example, but it should give you a taste of the nice stuff that is possible in Haskell. For the time being, though, I'd suggest focusing on the essentials. C or Python will definitely help you build a solid foundation in how to instruct the computer to do the calculations you require of it.

Finally, should you have any programming questions, feel free to drop me a line by email any time, and I will be more than happy to help.

They take perhaps 5 times longer to write and debug code of average complexity (say, an algorithm for finding a maximum-weight matching in a bipartite graph), resulting in code that is perhaps 2 to 5 times faster than the same thing written in, say, Python. OK, so you have to wait a little longer for answers, but you save much more time in coding/debugging. And there are easy ways to call these special-purpose packages written in C/C++ from Python, should you need them.

I had a recent project: compute the flag vectors of all the full-dimensional 01-polytopes in dimension 5. Fortunately, a list of such polytopes (there are over a million of them) has already been computed.

I modified the C program that accesses this list (it is binary data) to produce the list in the form I needed.

I used a little bit of Perl so I could use polymake (which can be scripted with Perl) to compute the flag vectors and print out the results.

I used Python (and in particular numpy) so that I could do some calculations on these flag vectors. I also used Python to glue everything together.

Next, I want to determine the rank of certain quite large sets of flag vectors, but numpy doesn’t provide this. Nor does it provide arbitrary precision integers. So I might find myself using pari/gp for that bit.

So, in part, my advice would be to use the language that already has a community doing calculations similar to the ones you want to do.

By the way, it took about 15 GHz-days to compute the flag vectors. I enjoyed the GRADSTUDENT joke. Even the best use it sometimes.

From Knuth’s ‘Digital Typography’, p. 505: “It was time to revise the draft of 13 May [1978] so that research assistants [grad students] Michael Plass and Frank Liang could prepare a prototype implementation [of TeX] while he [Knuth] was away [on a month-long trip].”

Start by setting up a work environment. It should at the very least allow you to easily compile, run, and test your program. If these three tasks are hard then you do not have a proper working environment.

Your program must compile at all times. If the compiler reports an error, never fix it by randomly changing code in the vicinity of the error. Fixing the code without understanding what was wrong in the first place is worse than not fixing it at all: you missed an opportunity to learn. If you have a big pile of code with 10+ errors, then it’s like trying to learn how to swim by asking someone to drop you off in the middle of the Atlantic. You won’t learn to swim. Most likely you’ll just die. So don’t do it.

Write your program incrementally. Keep it compiling. Run it from time to time and check that it does what you expect. This means that you should think about what you expect before running the program. Any mismatch between what you expected to happen and what happened is an opportunity to learn. Don’t miss it. Try to understand what is happening.

Whenever you have an issue you can’t explain it helps to try to reproduce it on the simplest example. If the issue is a compilation error, then make a copy of your program and then systematically trim down the code in a way that preserves the error. Do this until you get to the smallest possible program that still exhibits the error. That is, there’s nothing more you can trim without making the error go away. By the time you get to this step you probably know what was wrong, you can fix the problem in the small example, test, and then repeat the fix in your big program. If the example is tiny and you still don’t know what’s wrong then ask in the forum (or a demonstrator if you are in the lab).

You should test your program as it grows by running it. Once it approaches what you think is close to a final solution your testing must become more thorough. Do not just run the program, type in the input, and look at the output on the screen. Write all your tests (inputs and outputs) in files so that you can retest easily after you change the program. [The point here was that it should be easy to re-run the tests. There are fancier ways of doing it than just putting the input/output in paired files.]

What to do if the program does not do what you expect? First, do the easy thing: run the compiler with all warnings turned on (-W -Wall -pedantic) and check whether any of the warnings points at the problem. In any case, get rid of them all. Then run your program under valgrind to make sure it’s not the fault of nasal demons (undefined behaviour). If it is, then try to isolate the problem the same way you isolate a compiler error: trim down a copy of your program.

There are two basic debugging strategies: (1) read the program carefully and (2) trace your program for particular runs. The latter is easier, but as you become more experienced you’ll see that the former is much faster for simple bugs. Anyway, how do you trace a particular run of your program? Read the program with a particular input in mind. At each step, think about what the values of the variables are. If you feel lazy (and that’s often a good thing when you are a programmer; we are, after all, in the business of automating stuff), then put printf statements that print the values of variables at intermediate points and check where their values diverge from what you expect. Another option is to use a tool like gdb that lets you step through the program.

C is good at controlling what the computer does in detail. That’s *good*: (1) you can tell the computer exactly what to do and (2) you understand better how the computer works. It’s also *bad*: sometimes you have to specify details you really don’t care about.

Binaries aren’t that portable, so I wouldn’t say that’s an advantage of C.

My opinion about which programming language to learn: one at a time, and don’t stop. 🙂 After a while, it gets quite fast to grok a new language. The slow part is fitting all the available libraries into your head, which you only need to do for the language you prefer and use most often.

SICP was already mentioned. It’s probably good if you decide to learn some LISP variety. HTDP has a similar style and it’s also using Scheme (a LISP variety). The content is quite simple and every working programmer should be familiar with those concepts.

Having read your brief intro to math, I suspect you’ll like Knuth’s TAoCP. For example, Section 1.2.2 describes logarithms in a very similar way you do. One exercise for that section asks for an algorithm that computes b^x for 0≤x<1 that uses only shifting, addition, and subtraction. The main benefit for you would probably be finding (more) connections between what you know (math) and (low-level) programming. TAoCP, however, is not a good place to learn a programming language. So it doesn’t directly address your goal. I just couldn’t resist mentioning it. 🙂

I agree with timur that solving (preferably easy) problems on TopCoder is a great way to learn C/C++, Java, or C#, especially because you can view countless solutions written by others. The best way to try it is to make an account, install the Kawigi plugin for the Arena, and go to the practice rooms. (Setting this up is an example of a situation in which a (CS) grad student would be useful.)

I ran the program in IDLE (Integrated Development Environment for Python):

“IDLE is a basic editor and interpreter environment (…). Good for beginners, it also serves as clear example code for those wanting to implement a moderately sophisticated, multi-platform GUI application.”

What about an experimental functional logic programming language based on Haskell: Curry. http://www.curry-language.org/ AquaCurry: http://danae.uni-muenster.de/~lux/curry/

Later on, you might want to have a look at Haskell. I recently spent quite a lot of time getting acquainted with it, which I certainly don’t regret. I think it’s not an exaggeration to say that learning Haskell will change the way you think about programming.

It encourages a way of thinking which should appeal to most mathematicians: a very high level of abstraction, strong emphasis on functions in the mathematical sense (with clearly defined domain and codomain and no side effects), higher-order functions that take other functions as arguments, building new functions by composing old ones without necessarily naming the arguments (like saying f=g \circ h instead of f(x)=g(h(x))), and so on. Like Python, Haskell can be run in interactive mode, so that you can experiment at the command line.

Showdown, identical algorithm, C versus C++: the 3% difference in C++’s favour was a surprise; it was a consistent, although small, win across all the runs. http://unthought.net/c++/c_vs_c++.html

I agree with: “Picking something and learning it reasonably well seems like a good strategy.”

You can invest a lot of time learning a language and there is always a feeling that maybe your efforts could have been directed toward more positive ends.

If you choose to learn C, there probably isn’t a better book than Kernighan and Ritchie’s “The C Programming Language”, a very clean and concise description of the language that doesn’t take very long to read.

As others have pointed out, there is a wealth of highly optimised numerical code written in C (diagonalization of matrices, etc.) that is freely available from the GNU software project (along with bound volumes of the documentation for a small charge).

A working knowledge of C also makes it easier to learn C++ which is a far more flexible language.

However, C++ is also a much larger investment of time. It took me a few years before I started to get my head around the language (before that, I always wondered whether there might be some far better way to write my particular bit of code).

To learn C++, there are two excellent books written by the creator of C++ (B. Stroustrup). The books are thick but don’t be put off by this – the author devotes quite a lot of space to explaining how the language works, and provides you with many different examples of how to write the same piece of code (which you will appreciate later).

Finally, you may hear somewhere down the road that C++ is an inherently ‘slower’ language than C or Fortran. I don’t think this has been borne out over time, and if I had to learn C++ all over again, I would have liked someone to suggest the eye-opening little book by Bulka and Mayhew (“Efficient C++: Performance Programming Techniques”).

As far as graphics go, a lot of people seem to be using the Grace plotting program (I think it’s called ‘xmgrace’), which is also free and practically self-explanatory (I have the Mac version). There is also a simpler, free plotting program called gnuplot (despite the name, not part of the GNU project).
