The Economics of Programming

It’s commonly known that if you want to be a software developer, you have to stay up to date with the latest technologies. What often isn’t explained is what this even means, since there are now a tremendous number of specializations within software development. It makes no sense to keep up with everything since 1) you can’t and 2) you probably don’t want to in the first place.

Programming is full of fanboys and cults. There are tens of millions of programmers worldwide. You get to enjoy some personal creativity in your work, which is nice, but you’re a cog in a much larger machine, and even though there’s a modicum of status in being able to call yourself a “Software Developer”, it’s a sobering reality that you are still relatively replaceable, your lines of code may outlive you, and nobody will ultimately care. Long gone are the glory days of Big Metal, when simply being a student in the Mathematics department (or, later, Computer Science) at a prestigious university almost guaranteed you a place in the history books. No. You are one among millions. Your work may be valuable (you’re certainly paid along those lines), but that’s only a function of how many people need software of one kind or another.

Enter the cults of thought. Smart people, over many decades, made smart observations about how best to write good software. Things change over time, yes, but you’d be amazed how many “new” ideas were first invented in the 1970s. There are a tremendous number of choices now in programming: which languages do I program in? Which frameworks do I use? Which formatting standards do I follow? How does my organization run meetings? What is the best way to document code changes? There are as many opinions as there are software developers, and many people become extremely opinionated in the process. Text Editor Wars. Command line vs Graphical User Interface. SQL vs NoSQL. Mac vs Linux vs Windows. Agile vs Scrum. All this reveals is just how freaking difficult programming is. Not because programs are particularly difficult to write (the vast majority of people could learn to), but because programs are typically written by teams, and you can’t simply shoot from the hip and do whatever you want, or nothing would ever get done. “Programming” breaks down into rules and standards. So if you want to be a programmer, prepare yourself for a heck of a lot of rules and standards.

Which leads us back to the main purpose of this post: how do you manage all of this? Where is the best cost/benefit tradeoff?

Well for one, if you want to keep up with ALL of the new developments, this is going to cost you in time. You get to spend vast amounts of time, typically outside of work, just to “keep up”. Sounds like fun, right? You’re passionate about your work, right, so you’re just going to sell your soul to the devil so you can be “up to date”, right? After all, gotta stay competitive and keep that LinkedIn profile loaded, huh?

Except that most companies don’t need you to be up to date on ALL of the new developments. I learned the latest and greatest of the .NET Core framework, only to change jobs to a company that used the slightly old-school .NET Framework. Besides, every company develops its own standards. If the standards haven’t been updated to the newest technologies, it doesn’t even matter.

Besides, not all new technologies STICK. Back in the early and mid-2010s, a style of web design called “parallax” hit the market. It was a fancy way of scrolling in which layered elements moved at different speeds as you scrolled down. Sometimes you might see different parts of an image revealed behind the content as you scrolled; other times things on the webpage would actually shift. It was kind of cool, and business managers lost their damn minds. “We need parallax NOW!” I can imagine them screaming.
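If you’ve never peeked under the hood, the core trick is smaller than it looks. Here’s a rough sketch in TypeScript of the simplest version, assuming a hypothetical background element with the class “parallax-bg” layered behind the page content (the class name is made up for illustration):

```typescript
// Minimal parallax sketch: shift a background layer at half the scroll
// speed so it appears to sit "deeper" than the foreground content.
// ".parallax-bg" is a hypothetical class name used for this example.
const layer = document.querySelector<HTMLElement>(".parallax-bg");

window.addEventListener("scroll", () => {
  if (!layer) return;
  // Moving the layer slower than the page creates the illusion of depth.
  layer.style.transform = `translateY(${window.scrollY * 0.5}px)`;
});
```

Real implementations pile on CSS transforms, multiple layers, and performance tricks, but that mismatch in scroll speed is the whole illusion.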

And then it petered out. I suspect it was because people realized it didn’t actually accomplish anything; it was just “kinda cool”. There was a time when I felt like an idiot because I didn’t know how they did that, but I never needed to know, and I have never encountered a project where it was necessary. Leave it to the front-end shops.

So sure. You can learn the latest and greatest. But you can’t always count on it staying that way. Every time you commit to a new technology, you’re playing a small lottery. Is it worth your time? Maybe. Maybe not.

Of course, my parents knew a guy who only coded in one language, and he did that a long time ago. I guess something did finally work out for him, but you really can’t just learn one single language and hope that will cut it. Just because you wrote C code in the 80s doesn’t mean somebody is going to hire you as a web developer now.

The real question you have to ask yourself as a software developer is how much value you can get for the least amount of your time. A small minority of developers love learning new things whether they are relevant or not, so if that’s you, good for you. Have fun with that. Those of us who maybe don’t want to spend our whole lives writing lines of code that quickly become meaningless or irrelevant have to be smarter than that.

There are three general routes:

  1. You can learn a niche language/framework/technology and get really good at that. Although your employment prospects will be fewer, you may be able to lock yourself into some job security simply by being rare. You will also have a relatively narrow range of things to study.
  2. You can learn a broad-based language/framework/technology that is widely used and well-established. You might be very replaceable, but you will have plenty of job opportunities.
  3. You can learn the latest-and-greatest. If the language/framework/technology catches on, you could see an explosion in salary as employers fight for you. Business leaders can be dumb when it comes to technology: if everybody is eating the pie, they think they need a slice, too, even if it’s something lame like key lime. But the language/framework/technology could also be a bust, and you might find yourself looking back at years of irrelevant experience.

An example of #1 would be an Oracle Developer or DBA. You focus on…Oracle. Designing, configuring, programming. All Oracle. You even have some ready-made study material for your career path. That’s probably all you’ll ever do, but hey, if somebody is going to pay you to do it, why not? Another example from a different perspective might be a “front-end developer”, who focuses solely on the visual and behavioral aspects of website design.

An example of #2 would be a “full-stack developer”, meaning somebody who works with the database, the server, and the user interface. There are many different technology stacks that perform these functions, some of which are trendier than others. You will not necessarily be an expert in any one of them, but you will use them all. This is very common.

An example of #3 would be a full-stack Node developer working with a JavaScript-based NoSQL database. While this is slowly becoming more established, many JavaScript frameworks have gone bust, and picking a new one can truly be a gamble. Plus, many NoSQL databases are slowly adding SQL-style query support because they realized that not having a standard protocol for accessing data sucks. Another example would be a blockchain developer. Will this new technology pay off? Well, it’s certainly not going away, and it’s not just used for cryptocurrencies. Does it warrant its own sub-industry? Time will tell.

Personally, I’m not a gambler. I’m not interested in wasting my time learning something that may not make me money. Well, at least not with respect to programming. Learning something simply because it’s new sounds stupid to me. Yay, another Javascript framework! I’m so excited! Not. I couldn’t care less. I’m also not a #1, although the ability to feel like an expert is definitely appealing. My problem is with lock-in. You can very easily get stuck living in a big city or being trapped at your job because there is no other company in your area that uses that framework/technology. That sounds like an anxiety attack waiting to happen. It also robs you of the power to tell a bad employer to go screw itself, which, in my opinion, is a desirable power. Besides, your technologies may be harder to use on your own. Do I personally have any reason to run an Oracle database at home? HA HA HA. No.

And this is why I’m square at #2. I don’t mind being replaceable as long as I feel capable. I work with the Microsoft stack because Microsoft, at least as far as programming is concerned, tends to take big, thoroughly considered steps before committing. They don’t get everything right, but C# has been around a long time, and it’s not going anywhere anytime soon. Ditto SQL Server. You can learn both of these and very likely find a job somewhere. The front-end stuff has been shifting every five or so years, and there are some interesting developments on the web-page side, but I don’t mind learning new things, as long as I have a reasonable expectation they will still be relevant in the future. It’s not an overwhelming number of things, either; it just takes a little elbow grease. I also like the idea of being able to get a job almost anywhere I go. Granted, some jobs are remote-only, and this will probably increase once the world makes it past this virus thing, but many, many companies still prefer you to be onsite. There’s no shortage of companies running the Microsoft stack. I’m a fan of flexibility.

Anyway, you can’t let the industry’s whims slap you around and tell you what to learn all the time. I don’t care how charismatic the voice is; some people are just plain wrong. Like anything in life, you have to weigh your options carefully and know what your goals are.