Business Ping Pong

I used to be a notorious procrastinator. I still procrastinate, but I’ve gotten much better about it. The problem was my thought process. We all think in terms of pain: we avoid new pain or seek to minimize existing pain. Procrastination is simply a mechanism for avoiding new pain. What we fail to realize, however, is that the same pain dwells in our minds. That mental to-do list eats away at our brain capacity, distracting us from other tasks. It’s a numbing pain that keeps coming back. It’s like storing clutter in your house, where the house is your brain. After thinking about my own procrastination, I realized that the pain of dwelling on a task is bigger than the pain of actually doing the task.

As I started tackling these tasks, it became easier to do them. The hardest part is starting, and I got better at it once I began forcing myself to start. The other trick is that you don’t have to do it all in one go: once you start tackling a problem, it gets smaller. The other side of the coin is addressing tasks that involve other people. We tend to procrastinate on our responses until the nagging from the other side becomes severe enough. But what if you were to handle those responses as soon as you can? I recently heard a term for this in a conversation: business ping pong. When a ball comes your way, hit it back as soon as you can. Don’t procrastinate; be the one waiting on others, not the other way around. The sooner you reply, the less time you spend thinking about it, and the less clutter you keep in your house.

Bringing a Project to Life

I have always found that building something out of nothing is hard – whether it be a new feature or a new product. It’s much easier to fix a bug or make an improvement to working code. Yet I’ve been on some ambitious projects that got done (building a completely new API for a large ad-serving company, and a radar program that was designed and built from scratch in 3 years, to name the most difficult). Even though many people thought it was a miracle we got these done, I noticed some techniques that were useful in bringing these projects to fruition, and they apply to open source projects as well.

Before getting started, you must first give up the idea of building a finished product. You need to make something just usable to start. If you have something usable (and useful), people will start using it – and for open source projects, some of those users will help make it better. After a project is brought to life, it can evolve into something more – a finished product. The difficulty then lies in making that first usable product.

Fail Fast

Nothing has been accomplished yet. You have to start planning and try to eliminate bad ideas before they take up too much time. The most efficient way of doing this is by getting feedback from others. If you find experts, or even just people with different perspectives, they can draw upon their experiences to quickly point out problems and save you from spending days or weeks on a solution that will never work.

In addition to getting feedback, prototype things and build proof-of-concept models. Most importantly, don’t invest a lot of time here. You don’t need pretty code – but it should be accurate. The goal is to wrap your head around the problem and discover potential solutions and failures early – not to write something maintainable. The models themselves aren’t important; the data/information you learn from these experiments is what matters.

Build a pre-alpha version

You need to get something working that you can build off of. Most projects have more than one component; even small web apps have a view and a controller. Connecting the different parts together is difficult because it is easy to make wrong assumptions about how and what data is passed between interfaces.

On the radar system I worked on, many properties of the raw input signal were not originally planned to be passed along to processes further down the line. But after building the first version of the code, the format of that data quickly changed to include additional information. And after the first version of every component was done, we could look at a display, albeit a very crude one, and see a real object – that is very motivating.

Focus, Iterate & Test

The radar system I worked on was also hard because we were short on time due to an accelerated schedule – time was our most valuable resource. The main reason we were still able to finish was that the motto of the project was an idea from Voltaire: “perfect is the enemy of good enough”. We were told not to spend any time working on something that already met the specs – we could not spare it. Instead we had to find something that was not yet good enough, and focus there.

This list of areas needing focus became our list of improvements/bugs. We had made it past the difficult part of building something from nothing with our pre-alpha version, and now we were working on fixes and improvements – the easy part. After repeating the focus & fix process many times, we got to the point where we had a usable product.

Also crucial here is testing. You need tests so you can confidently make changes without causing regressions. The less effort required to run them, the better (automated ;)).
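
As a minimal sketch of what “low effort to run” can look like, here is an automated regression test using Node’s built-in assert module (midpoint() is a hypothetical function standing in for whatever your project computes):

var assert = require('assert');

// Hypothetical function under test
function midpoint(a, b) {
    return {x: (a.x + b.x) / 2, y: (a.y + b.y) / 2};
}

// Running `node test.js` passes silently or throws on a regression
assert.deepEqual(midpoint({x: 0, y: 0}, {x: 4, y: 2}), {x: 2, y: 1});
console.log('all tests passed');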

Share your project

Alex Martelli gave a talk at PyCon 2013, “Good enough is good enough”. He said that if you make something people need, they will use it, even if it’s not perfect. The catch is that it has to be something people find useful. If enough people use your open source project, some of them will contribute back, fix bugs, and maybe even clean up and optimize code. Even if people do not fix anything, they can file issue tickets. Just last weekend we had someone file several issues with example code – perfect test cases.

Modifying Native JavaScript Objects

There are still debates in the JavaScript community about modifying native JavaScript objects (Array, Number, String, Object). Some developers believe it’s evil, while others encourage it. If you’ve looked at RapydScript’s stdlib, you probably already know my stance on it – I’m actually in favor of such modifications when they make sense. I might not have as much experience in JavaScript as the gurus, but I have enough experience in language design to form sane opinions about which practices are evil and which are not.

Most arguments against modifying native objects hold no water. They give examples where the developer overrides a native method to work differently from the original implementation. That is indeed evil when calling the method with the same arguments as the original JavaScript implementation no longer produces the same result. For example, if I decided to rewrite String.prototype.replace such that it worked globally, like in most other languages, instead of on the first occurrence:

// Overridden to replace ALL occurrences, not just the first –
// a change in behavior for the same arguments
String.prototype.replace = function(orig, sub) {
    return this.split(orig).join(sub);
};

I could break other libraries/widgets on the same page (or make them behave differently than they should) if they assume that replace() will only replace the first occurrence. I completely agree that one should never override existing methods to do something else. It’s important that the basic subset of the JavaScript language works exactly as others would expect it to. If you can’t guarantee that the foundation of your house stays level, you can’t guarantee that your house will not collapse.
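
To make the breakage concrete, here is the same call before and after the override above:

'aaa'.replace('a', 'b');    // native: 'baa' (first occurrence only)
// after the override:
'aaa'.replace('a', 'b');    // 'bbb' – any code expecting 'baa' now breaks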

There are some gray-area cases, where the developer extends the functionality of an existing method such that it still works as expected given the original arguments, but does something else when more arguments are given:

// Extended pop(): no arguments behaves like the native version,
// an index argument removes the element at that position instead
Array.prototype.pop = function(index) {
    if (!arguments.length) {
        index = this.length - 1;
    }
    return this.splice(index, 1)[0];
};

In this case, myArray.pop() will still work exactly as the user would expect. However, when called as myArray.pop(0), the function will behave like myArray.shift(), or like the splice() method when called with an index in the middle. The only real disadvantage here is that by overriding the native method, the developer has made the function slower than the original (native methods tend to be faster). The claim that this is bad because the function could break other logic that mistakenly called it with an argument (which was ignored before), on the other hand, is not legitimate. The bug is in the code making the bad call, not in the overriding logic. This is why I like Python: it complains about unexpected arguments right away instead of ignoring them so they can become a bigger problem later.
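
A quick usage sketch of the extended method:

var myArray = ['a', 'b', 'c'];
myArray.pop();      // 'c' – identical to the native pop()
myArray.pop(0);     // 'a' – the new behavior, equivalent to shift()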

Appending your own methods, on the other hand, is the most benign way to modify a native object. This is when we implement something new ourselves, for example:

// A new method appended to Array: returns a shallow copy
Array.prototype.copy = function() {
    return this.slice();
};

When doing so, however, it’s important to be aware of whether other libraries append a method with the same name but different functionality; if so, you might want to pick a different name (this is usually not a problem with libraries/APIs, since you’re unlikely to need more than one library for native object manipulation). So far, the most popular argument I’ve seen against this type of native object modification is the potential name collision if a future version of JavaScript adds a method by the same name. This argument is moot. You’re not developing your app for a hypothetical language that will exist 10 years from now, and when the time comes you’ll easily be able to rename the offending function, since you know all references to it are your own. The entire ECMAScript specification is easily available, and unless you plan to drop support for all browsers older than 6 months, the JavaScript implementation you’ll be using as a basis will probably lag behind it by a few months to a few years, giving you more than enough time to handle naming collisions.
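
One defensive pattern worth considering (a sketch, not a requirement) is to check for an existing name before appending, so a collision is noticed instead of silently overwritten:

// Only append if no method with this name exists yet
if (!Array.prototype.copy) {
    Array.prototype.copy = function() {
        return this.slice();
    };
}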

You might notice, however, that if you start appending methods to the native Object object, any jQuery code running on the same page will break. The problem is not with overwriting native methods, but with poor assumptions made by the jQuery developers who wrote buggy code. The jQuery developers are part of the group who believe that appending to native JavaScript objects is evil, and instead of properly testing that they’re iterating through the object’s own properties, they assume that no one else will append anything to Object when they scan through it using “for (key in obj)”-style logic. To avoid the same mistake jQuery made, make sure to iterate only through your own keys:

for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
        // ... handle only the object's own properties
    }
}

Alternatively, if you only care about the latest browsers, you can use Object.getOwnPropertyNames() instead. It was a poor design decision on JavaScript’s part to default to iterating through every enumerable property of the object, including inherited ones, but that can’t be changed now. jQuery developers have been confronted about their assumption before; they claim the main reason was performance. Independent tests showed about a 5% performance hit from adding this check, so I don’t buy the performance claim, especially since jQuery already makes multiple performance sacrifices in the name of usability (the show/hide logic having safety checks to figure out the object’s current state, for example). In my opinion, John Resig’s stance on this is no different than failing to do a division-by-zero check (in Python, that is, since JavaScript will automatically return Infinity) and then claiming that it’s for performance reasons and that anyone who passes arguments that eventually result in a division by zero is the one at fault.
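
A short demonstration of the difference (deliberately polluting Object.prototype to show what for...in picks up):

Object.prototype.custom = function() {};   // the kind of append that trips jQuery

var obj = {a: 1, b: 2};
var keys = [];
for (var key in obj) {
    keys.push(key);                        // ['a', 'b', 'custom'] – inherited too
}
Object.getOwnPropertyNames(obj);           // ['a', 'b'] – own properties only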

I haven’t checked whether this is fixed in jQuery 2.0; without support for older browsers there is no reason not to use Object.getOwnPropertyNames(). In the meantime, try not to overwrite Object (other native objects are fine) if you’re using jQuery anywhere at all (if you’re not, it’s not a problem). I should also mention that if you don’t plan to support older browsers (such as IE8), a better practice would be to use the Object.defineProperty method rather than appending to the prototype, since the property it creates is non-enumerable by default and won’t get picked up when iterating through the object, making the jQuery bug a non-issue.
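
A minimal sketch of that approach, reusing the copy() example from earlier:

// defineProperty creates a non-enumerable property by default (IE9+),
// so for...in loops – including jQuery's – never see it
Object.defineProperty(Array.prototype, 'copy', {
    value: function() {
        return this.slice();
    },
    writable: true,
    configurable: true
});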

As for my own stance: it’s fine to add to the functionality of a native object, but never to remove from it. Modifying an existing property to work differently than before, given the same function signature, counts as removal and is evil as well. I have removed the dependence on overwriting native methods in RapydScript’s stdlib2, but that was mainly to clean up the standard library and make it compatible with jQuery, not because I’m against modifying native objects.

Productivity vs Performance

When I was writing software in college, there was more emphasis on program execution speed than on time spent implementing it. In startups and most work environments, the reverse tends to be true. It took me a while to figure this out, and for the first few years of programming I would often introduce optimizations that were not necessary, or make code uglier than it needed to be for the sake of performance. I’m not talking about premature optimization; I’m talking about poor design decisions stemming from the assumption that performance trumps legibility.

I’ve spent a lot of time refactoring poorly written code in Grafpad – code that wasn’t necessarily bad to begin with, but that quickly outgrew its initial purpose as more special cases were introduced to it. What gave me even more grief, however, were the special cases I imposed on myself in an attempt to conserve bandwidth, CPU, and memory. For example, each shape point in Grafpad consists of 3 items: x-coordinate, y-coordinate, and curvature flag. In the original version of Grafpad, shapes that I knew couldn’t have curvatures omitted that curvature flag. As a further optimization, I wrote faster versions of multiple algorithms (edge detection, intersection, bounding-box computation) that didn’t have to deal with curved lines. I later ended up regretting that, having realized I had done twice as much work to handle a case that may have been fast enough anyway. I wasted time I could have invested elsewhere, and I introduced special-case logic that didn’t need to be there.

Another example is the logic I created for transmitting data to the server and back. I didn’t like that JSON.stringify included a lot of irrelevant information that I wouldn’t need, since I knew exactly what kind of object I was sending over. My packing method transmitted only the values themselves, and I unpacked them correctly by following the same order of operations. Once again, I performed a bunch of work that JSON.stringify could have handled for me, and ended up with a more fragile solution that depended on the pack/unpack logic on both the front-end and back-end being identical.
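
A hypothetical reconstruction of the trade-off (not Grafpad’s actual code): the hand-rolled pack/unpack pair saves bytes, but silently breaks if either side changes the field order, while JSON.stringify is self-describing:

// Custom packing: values only; both ends must agree on the order
function packPoint(p) {
    return [p.x, p.y, p.curve];                     // fixed order: x, y, curvature
}
function unpackPoint(arr) {
    return {x: arr[0], y: arr[1], curve: arr[2]};   // must mirror packPoint exactly
}

// JSON sends the key names too – more bytes, but order-independent
JSON.stringify({x: 1, y: 2, curve: false});         // '{"x":1,"y":2,"curve":false}'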

I’m not saying the work I did was pointless; it simply wasn’t the kind of work I needed to do at an early-stage startup. By the time these kinds of optimizations become relevant, the product should already have multiple users and a team of developers with time to do them. An early-stage startup should concentrate on getting the product out the door and fixing the bugs that affect users; performance issues rarely matter at that stage. And with proper use of polymorphism, those optimizations will be easy to add later on, in the cases where they do matter.
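
As a sketch of what I mean by leaving room for optimization through polymorphism (hypothetical names, not Grafpad’s code): ship one general implementation, and add a specialized fast path later without touching any calling code:

// General version: scan every point of the shape
function Shape(points) { this.points = points; }
Shape.prototype.boundingBox = function() {
    var xs = this.points.map(function(p) { return p.x; });
    var ys = this.points.map(function(p) { return p.y; });
    return [Math.min.apply(null, xs), Math.min.apply(null, ys),
            Math.max.apply(null, xs), Math.max.apply(null, ys)];
};

// Added later, only if profiling justifies it: rectangles need just
// two opposite corners, and callers never know the difference
function Rectangle(corners) { Shape.call(this, corners); }
Rectangle.prototype = Object.create(Shape.prototype);
Rectangle.prototype.boundingBox = function() {
    var a = this.points[0], b = this.points[2];     // opposite corners
    return [Math.min(a.x, b.x), Math.min(a.y, b.y),
            Math.max(a.x, b.x), Math.max(a.y, b.y)];
};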