Thursday, October 6, 2011

To: rememberingsteve@apple.com

My first computer was an Apple II Plus that my dad brought home as a Christmas present when I was in 3rd grade. Although not as immediately user-friendly as computers would become, the machine that Woz built just plain worked. Always. And it defined for me a sense of taste for what technical and design excellence really meant.

So for me, Apple was the only computer that mattered, until about 1995, long after Woz and Steve Jobs had gone, and Apple products had become just another commodity item. I bought a $4000 Mac running System 7 that was so unstable I swore off Apple products for life. By this time I was being paid to program computers, and I figured that if I was going to be miserable, I might as well be miserable on the same awful machines that everyone else used. Computers were no longer sources of imagination and joy, but just another lifeless tool we could use to turn $1 into $1.10.

I was happy to see Steve Jobs return, but I was still skeptical, even when the first iMacs and iPods made me believe that perhaps there was once again a group of people in this world that cared about making something useful and beautiful. It wasn't until Steve Jobs presented the first iPod nano that I had the same emotional reaction I'd had when I was 8, seeing an Apple II for the first time: "I have to have that, and I have to have it now."

Not long thereafter I bought a 17" MacBook Pro, having heard good things about Mac OS X, and having been frustrated to the point of tears with a Dell laptop that would periodically decide to ignore me as it ran its anti-virus software. I'm typing this on that same MacBook Pro, which is running just fine 4 years later.

I never met Steve Jobs. I don't even know any of the engineers at Apple. But it has mattered to me in my life knowing that somewhere in this world there is a group of people that cares about making useful and beautiful things. That technology can improve the human condition. That people can strive for excellence, and achieve it.

Sincerely,
Kurt Christensen

Saturday, October 3, 2009

Learning Things of Lasting Value

In the book "Seeking Wisdom", Peter Bevelin describes the ways in which we make bad decisions, why we make bad decisions, and what we can do to avoid making bad decisions. The author synthesizes lessons from a number of sources, focusing especially on Warren Buffett and Charlie Munger, the leaders of Berkshire Hathaway (a successful holding corporation of which you may have heard).

In listing the methods for improving the quality of one's thought process, Mr. Bevelin elaborates on the method of "Simplification" under the sub-heading "Focus on what you can know and that makes a difference":

"What's knowable and important? And what can be translated into useful action?"

At the ripe old age of 38, working within a technology industry that changes daily, this question is gaining in importance for me. I have lost the energy, and patience, and interest required to stay abreast of technological flavors-of-the-month. Ruby on Rails? Honestly, I only barely give a shit. Java 7? Flex? Silverlight? These things are only valuable until the powers behind them decide I should learn something newer and better, and that's only six months away.

So, to remain effective over the long haul, I am required to focus on a few things that I believe can have lasting value. And here's my list:
  • The fundamentals of computer science, as laid out in the book "Structure and Interpretation of Computer Programs", by Abelson and Sussman
    • Why? Because everything you'll ever really need to know about computing is contained in this one book. Seriously.
  • Fundamental algorithms and data abstractions
    • Why? Because our job as computer programmers is to think, and algorithms and data structures (along with metaphor) are the raw materials. You can't be an effective civil engineer if you don't understand the properties of concrete, and you can't be an effective computer programmer if you don't understand the ways in which you can structure and process data, and the processes themselves (which are really just another form of data).
  • The Unix operating system - how it works, and why
    • Why? Because flavors of Unix - Linux in particular - have won. I have mixed feelings about this, in that it hardly seems the best we can do; however, Unix appears to be the once and future king. This may seem a ridiculous statement, given the continuing market share of Windows, but it's true. I have no special hatred for Microsoft, but they're locked on a trajectory that ends badly, much as GM was in, say, 1995.
  • Proficiency in at least one flavor of the Unix shell - the Bash shell, probably
    • Why? Despite the ancient clunkiness of the Unix command line as a means for interacting with a computer, there is an important way in which it was - and is - decades ahead of its time: Unix gave us the notion of assembling small, powerful tools into larger, more powerful tools (an idea sketched after this list). Other areas of computing have only barely begun to explore this design space (e.g., web mashups), but it will become more important as computing becomes more heterogeneous and complex. As a practical matter, you can get a lot done with some basic shell scripting skills, and if you know Bash, you can probably apply that knowledge to other, less powerful shells (e.g., DOS).
  • Proficiency with a universal text editor (e.g., vi or emacs)
    • Why? Because humans interact with computers via text, and will be long after I'm dead. I never learned to type properly as a teenager, because I was assured that keyboards would be obsolete in 20 years. Well, I'm no longer waiting for that one to come true (a note to naysayers: explain how a voice interface will work in an office environment). Consequently, typing skills and proficiency with a text editor aren't going to disappear anytime soon, and for longevity's sake we should probably choose the lowest common denominator in text editing that still affords the power we need - and that's probably vi or emacs. I wish that weren't true. But it is. You can learn something else (I myself prefer IntelliJ), as long as you accept that you'll be throwing away that knowledge and learning a new tool at some point in the next few years.
  • Proficiency with C
    • Why? Because C remains the lingua franca for communicating with the underlying hardware, and if you can't speak this language yourself, you'll always be intimidated by and limited by those who can.
  • Proficiency with Lisp (or some dialect, such as Scheme or Arc)
    • Why? Because it's a medium for expressing thought, similar in linguistic power to mathematics. There are many other computer languages, of course, but no other language combines the ability to express functional abstractions, data abstractions, and metalinguistic abstraction. For the time being, Ruby or Python may be an "acceptable Lisp" (see the sketch after this list). But beyond that, I have no burning desire to learn Ruby or Python, because they have nothing to teach me that I can't learn from Lisp.
  • Proficiency with SQL
    • Why? Because relational databases probably aren't disappearing anytime soon, and so you'll need to write a query occasionally if you expect to get anything done. More importantly, however, SQL offers a different way of thinking about computing - in terms of sets - and thus has something to teach (there's a sketch of this after the list, too).
  • Agile values, principles and practices (at least until something better comes along)
    • The word "agile" is dead. But many of the underlying values will remain, and rightly so. Chief among them is the idea of "failing fast" - iterating to better understand the product one is building, and to get better at building it.
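
To make the small-tools idea from the shell bullet concrete, here is a rough sketch in Ruby of the same philosophy: each step is a tiny transformation, and the power comes from chaining them, just as with Unix pipes. (The log file and its format are invented for illustration.)

    # The pipeline  awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -5
    # transliterated into Ruby: small pieces, composed into something bigger.
    counts = Hash.new(0)
    File.readlines("access.log").each do |line|
      counts[line.split.first] += 1            # grab field 1 (a client IP) and tally it
    end
    counts.sort_by { |_ip, n| -n }.first(5).each do |ip, n|
      puts "#{n}\t#{ip}"                       # the five noisiest clients, like head -5
    end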
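
And since I just claimed that Ruby may be an "acceptable Lisp", here is roughly what I mean, sketched in Ruby: lambdas give you functional abstraction, and define_method gives you a small taste of metalinguistic abstraction. (All the names below are invented for illustration.)

    # Functional abstraction: functions are values, composable like any other data.
    compose = lambda { |f, g| lambda { |x| f.call(g.call(x)) } }
    double  = lambda { |x| x * 2 }
    incr    = lambda { |x| x + 1 }
    puts compose.call(double, incr).call(10)   # => 22

    # A taste of metalinguistic abstraction: defining methods at runtime,
    # the way Lisp macros let you grow the language toward your problem.
    class Temperature
      def initialize(unit)
        @unit = unit
      end
      [:celsius, :fahrenheit, :kelvin].each do |unit|
        define_method("in_#{unit}?") { @unit == unit }
      end
    end
    puts Temperature.new(:kelvin).in_kelvin?   # => true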
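
As for thinking in sets, here is a small sketch using Ruby's sqlite3 gem so that it runs standalone (the schema and data are invented): notice that the query declares which set of rows you want, and never says how to loop over them.

    require "sqlite3"   # gem install sqlite3

    db = SQLite3::Database.new(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
    [["alice", 30], ["bob", 45], ["alice", 25], ["carol", 10]].each do |c, a|
      db.execute("INSERT INTO orders VALUES (?, ?)", [c, a])
    end

    # "Customers whose orders total more than 25" - a set, described rather than looped over.
    db.execute(<<-SQL).each { |customer, total| puts "#{customer}: #{total}" }
      SELECT customer, SUM(amount) AS total
      FROM orders
      GROUP BY customer
      HAVING SUM(amount) > 25
    SQL
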
In compiling this list, I realized that my age doesn't really matter - that in any endeavor, we should focus on the underlying values and principles before we get distracted with the latest fashions, because only then can we understand and intelligently evaluate the latest fashions.

So, that's my list, until someone gives me a better one. What's on your list?

Thursday, January 22, 2009

How to Debug

I once worked with a wonderfully talented programmer named Cuong Tran, who has an almost uncanny ability to determine the cause of bugs, and then fix those bugs. He once explained to me how to debug code: "When you're trying to solve a problem, strip away everything and make it absolutely as simple as possible, until you have something that works. Then add stuff back, bit by bit, until you see the problem." Well, obviously. And yet I keep having to learn this lesson over and over again. Most recently, the teacher was another wonderfully talented programmer (Brian Ericson), and the classroom was Redmine.

Redmine is a lightweight collaboration tool based on Rails. It includes a wiki, forums, bug tracking and more, and yet still manages to feel small and simple and easy to configure and use. At least, it felt that way, right up until I tried to get LDAP authentication happening. Following the instructions on the Redmine wiki, I was having no success, and the log file was giving precious little information, even with debug output. Why wasn't the damn thing working??

Fortunately, Redmine has two helpful personality traits: (1) It comes with its own source code, and (2) the source code is Ruby, which means we could very easily add our own debug output - or even modify behavior - to learn what was happening. And this enabled us (and by "us" I mean "Brian Ericson") to strip away everything until we had something that worked.

After some poking around, it became apparent that the trouble was in auth_source_ldap.rb. Inside this class, the real work was being done by the Net::LDAP class - shrink-wrapped Ruby, not Redmine code. So just forget about Redmine for a minute - could we simply call the search method on Net::LDAP and get something in response? We (and by "we" I mean "I") had almost given up when we (and by "we" I mean "Brian") discovered that yes, we could - but not with the sort of configuration information that I had provided.

The Redmine wiki implied that one simply needed to provide a username and password to connect to the LDAP server (assuming the LDAP server doesn't allow anonymous access, which mine does not). However, anyone familiar with LDAP - a club which did not include me, until yesterday - knows that you need to provide some context. Specifically, you need to chant the proper LDAP incantations to tell the LDAP server where in the directory hierarchy it should search for that user. Thus, instead of this:

Name:     Our LDAP
Host:     ldap.our-company.com
Port:     389
Account:  kurtc
Password: ********
Base DN:  ou=People, ou=Root, dc=our-company, dc=com

...Redmine requires this:

Name:     Our LDAP
Host:     ldap.our-company.com
Port:     389
Account:  uid=kurtc, ou=People, ou=Root, dc=our-company, dc=com
Password: ********
Base DN:  ou=People, ou=Root, dc=our-company, dc=com

It all makes perfect sense, when you ask the right questions in the right order: In Ruby, what's the standard way of searching for user information in an LDAP server? How is Redmine using this code? And are we passing in the expected arguments?

Once we simplified things enough to ask the right questions, the problem was obvious: we (and by "we" I mean "I") weren't passing in the right "Account" string, which needed to be the proper LDAP incantation, as opposed to a simple username.
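
For the record, the stripped-down test looked roughly like this - reconstructed from memory, with the hostname, DNs, and password as placeholders - calling the net-ldap gem directly:

    require "net/ldap"   # gem install net-ldap

    ldap = Net::LDAP.new(
      :host => "ldap.our-company.com",
      :port => 389,
      :auth => {
        :method   => :simple,
        # The key discovery: bind with the full DN, not a bare username.
        :username => "uid=kurtc, ou=People, ou=Root, dc=our-company, dc=com",
        :password => "secret"
      }
    )

    if ldap.bind
      filter = Net::LDAP::Filter.eq("uid", "kurtc")
      ldap.search(:base => "ou=People, ou=Root, dc=our-company, dc=com",
                  :filter => filter) do |entry|
        puts "Found: #{entry.dn}"
      end
    else
      puts "Bind failed: #{ldap.get_operation_result.message}"
    end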

So there it is - how to debug. Just strip away everything and make it absolutely as simple as possible, until you have something that works. Then add stuff back, bit by bit, until you see the problem. The details are left as an exercise for you, dear reader.