Design Patterns

Summary of Design Patterns

Below, I've copied his table and added my notes to help me remember each of the design patterns.

Creational   Based on the concept of creating an object.
Factory Method Creates an instance. In some cases, it can provide an instance of one of several related classes. In Angular, a factory gives you back an instance (the author of the factory should instantiate it for you) whereas a service returns the class and you must instantiate it yourself. (On Factory vs. Factory Method vs. Abstract Factory, see Stack Overflow: the Factory Method is where you're overriding the creation method in a subclass.)

Abstract Factory Creates instances from several families of classes without detailing concrete classes. It lets you create a whole family of related objects, instantiating a set (a family) of classes.
Builder Use instead of a factory when there are many parameters. You pass only the required members to the constructor, then set the rest through chained methods that return this. Separates object construction from its representation; always creates the same type of object.
Prototype A fully initialized instance used for copying or cloning. Giving properties through prototypal inheritance
Singleton A class with only a single instance with global access points.
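
The Builder idea above can be sketched in a few lines: required members go to the constructor, optional ones come through chained setters that return this. The class and field names here are illustrative, not from any particular library.

```javascript
// Builder sketch: required fields in the constructor,
// optional fields via chained setters that return `this`.
class RequestBuilder {
  constructor(url) {          // required member
    this.url = url;
    this.method = 'GET';      // sensible defaults for the rest
    this.headers = {};
  }
  withMethod(method) {
    this.method = method;
    return this;              // returning `this` enables chaining
  }
  withHeader(name, value) {
    this.headers[name] = value;
    return this;
  }
  build() {
    // always produces the same type of object
    return { url: this.url, method: this.method, headers: this.headers };
  }
}

const req = new RequestBuilder('/api/users')
  .withMethod('POST')
  .withHeader('Accept', 'application/json')
  .build();
```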

Structural   Based on the idea of building blocks of objects.

      Adapter Match interfaces of different classes so that classes can work together despite incompatible interfaces. Creates a universal interface.
      Bridge Separates an object's interface from its implementation so the two can vary independently.
      Composite Describes a group of objects that can be treated the same way as a single instance of an object; a structure of simple and composite objects which makes the total object more than just the sum of its parts.
      Decorator Dynamically add alternate processing to objects.
      Facade A single class that hides the complexity of an entire subsystem.
      Flyweight A fine-grained instance used for efficient sharing of information that is contained elsewhere.
      Proxy A place holder object representing the true object.
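
As a minimal sketch of the Adapter entry above: a legacy class has an incompatible interface, and the adapter wraps it behind the interface the rest of the code expects (all names here are illustrative).

```javascript
// Adapter sketch: OldLogger has an incompatible interface; the adapter
// exposes the log(message) interface the rest of the code expects.
class OldLogger {
  writeLine(level, text) {
    return `[${level}] ${text}`;
  }
}

class LoggerAdapter {
  constructor(oldLogger) {
    this.oldLogger = oldLogger;
  }
  log(message) {
    // translate the expected call into the legacy one
    return this.oldLogger.writeLine('INFO', message);
  }
}

const logger = new LoggerAdapter(new OldLogger());
const line = logger.log('hello');
```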

Behavioral   Based on the way objects play and work together.
      Interpreter A way to include language elements in an application to match the grammar of the intended language.
      Template Method Creates the shell of an algorithm in a method, then defers the exact steps to a subclass.

      Chain of Responsibility A way of passing a request between a chain of objects to find the object that can handle the request.
      Command Encapsulates a request as an object, enabling logging and/or queuing of requests, and provides error handling for unhandled requests.
      Iterator Sequentially access the elements of a collection without knowing the inner workings of the collection.
      Mediator Defines simplified communication between classes to prevent a group of classes from referring explicitly to each other.
      Memento Capture an object's internal state to be able to restore it later.
      Observer A way of notifying change to a number of classes to ensure consistency between the classes.
      State Alter an object's behavior when its state changes.
      Strategy Encapsulates an algorithm inside a class separating the selection from the implementation.
      Visitor Adds a new operation to a class without changing the class.
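
The Observer entry above is easy to sketch: a subject keeps a list of subscribers and notifies each of them on change, keeping them consistent. This is a minimal illustration, not any particular library's API.

```javascript
// Observer sketch: a subject notifies subscribed observers of changes.
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(fn) {
    this.observers.push(fn);
  }
  notify(data) {
    // every observer hears about the change, keeping them consistent
    this.observers.forEach(fn => fn(data));
  }
}

const received = [];
const subject = new Subject();
subject.subscribe(data => received.push('a:' + data));
subject.subscribe(data => received.push('b:' + data));
subject.notify('changed');
```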

Creational Design Patterns (creating objects)

Creational design patterns focus on object creation mechanisms, creating objects in a manner suitable for the situation we're working in. The basic approach to object creation can add complexity to a project; these patterns aim to solve that problem by controlling the creation process.

Some of the patterns that fall under this category are: Constructor, Factory, Abstract Factory, Prototype, Singleton and Builder.
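
As a quick sketch of one of these, a Singleton in JavaScript can be as simple as a class that lazily creates and caches its single instance behind a getInstance access point (the Config name is illustrative).

```javascript
// Singleton sketch: one shared instance with a global access point.
class Config {
  constructor() {
    this.values = {};
  }
  static getInstance() {
    if (!Config.instance) {
      Config.instance = new Config();   // lazily create the single instance
    }
    return Config.instance;
  }
}

const a = Config.getInstance();
const b = Config.getInstance();
// a and b refer to the same object
```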

Structural Design Patterns (composing objects together)

Structural patterns are concerned with object composition and typically identify simple ways to realize relationships between different objects. They help ensure that when one part of a system changes, the entire structure of the system doesn't need to do the same. They also assist in recasting parts of the system which don't fit a particular purpose into those that do.

Patterns that fall under this category include: Decorator, Facade, Flyweight, Adapter and Proxy.
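
To make one of these concrete, here is a minimal Facade sketch: one class hides the steps of a multi-part subsystem behind a single method (the order/inventory names are invented for illustration).

```javascript
// Facade sketch: one class hiding the complexity of a subsystem.
class Inventory {
  reserve(item) { return `reserved:${item}`; }
}
class Payment {
  charge(amount) { return `charged:${amount}`; }
}
class Shipping {
  schedule(item) { return `scheduled:${item}`; }
}

class OrderFacade {
  constructor() {
    this.inventory = new Inventory();
    this.payment = new Payment();
    this.shipping = new Shipping();
  }
  placeOrder(item, amount) {
    // callers see one method instead of three subsystems
    return [
      this.inventory.reserve(item),
      this.payment.charge(amount),
      this.shipping.schedule(item),
    ];
  }
}

const steps = new OrderFacade().placeOrder('book', 20);
```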

Behavioral Design Patterns (communicating between objects)

Behavioral patterns focus on improving or streamlining the communication between disparate objects in a system.

Some behavioral patterns include: Iterator, Mediator, Observer and Visitor.

  1. Lazy loading is a design pattern commonly used in computer programming to defer initialization of an object until the point at which it is needed. It can contribute to efficiency in the program's operation if properly and appropriately used. The opposite of lazy loading is eager loading.
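
A minimal sketch of lazy loading: wrap an expensive factory so the work happens only on first access and the result is cached thereafter (the helper names are illustrative).

```javascript
// Lazy loading sketch: defer an expensive initialization until first use.
let buildCount = 0;
function expensiveBuild() {
  buildCount += 1;            // stands in for real expensive work
  return { data: 'ready' };
}

function makeLazy(factory) {
  let cached;
  return () => {
    if (cached === undefined) {
      cached = factory();     // only built on first access
    }
    return cached;            // reused on every later access
  };
}

const getResource = makeLazy(expensiveBuild);
// nothing has been built yet at this point
const r1 = getResource();     // built now
const r2 = getResource();     // cached, not rebuilt
```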

Make for Javascript?

A few interesting articles on people using Make / makefiles for Javascript:

And on a related note, this is someone who advocates using npm scripts instead of grunt or gulp:

Who broke the build?

From the 2013 Google Testing conference (GTAC), two Googlers discussed how they are building a system to figure out who broke the build.

I highly recommend you watch the talk as it's only 15 minutes long and has some practical implications for anyone who uses CI. 

To summarize some of their ideas:

  • What you do when a build breaks depends on what kind of test breaks the build.
    • If it's a unit test, you are in luck because you can run them within minutes for each of the changes.*
    • If it's a "medium" test (e.g. running 8 minutes or less), you can use a binary search approach, recursing through the changes to eventually find the one that broke the build.
    • If it's a "large" test, then you are out of luck because some of these tests can take hours to run and it's infeasible to run them over and over again. This is when an engineer has to manually investigate the changes and figure out who broke the build.
  • The solution... is to use heuristics: essentially rules of thumb that work most of the time. They score each CL, and whoever has the highest score is most "suspected" of having broken the build. The neat part is that they actually showed data on how accurately their system ranked the actual change that broke the build, and it was pretty darn accurate (I think around the top ~1 percentile in most cases), which meant it saved Googlers from looking at 99% of the changes when manually identifying who broke the build.
  • They implemented two heuristic patterns, although they mentioned there are potentially others:
    • Looking at the "amount" of changes. This is pretty straightforward: if there are more changes, there's more potential to introduce a regression. It's a simple heuristic but it seems to be effective.
    • Looking at the dependency tree. The closer a change was to the core, the less they suspected it of breaking the build, for two reasons: 1) people who worked on core libraries that were depended on throughout Google were more likely to be careful and had stricter code review processes, and 2) if a key dependency was broken, it was highly likely it would be discovered by another team at Google since it was so widely depended upon.
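
The binary-search idea for "medium" tests can be sketched as follows. This is my own illustration, not code from the talk; the testPasses predicate stands in for actually running the test suite as of a given change.

```javascript
// Binary search sketch: find the first change at which the test breaks.
// `changes` is ordered oldest-to-newest; `testPasses(i)` is assumed to
// flip from true to false exactly once somewhere in the range.
function findBreakingChange(changes, testPasses) {
  let lo = 0;
  let hi = changes.length - 1;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (testPasses(mid)) {
      lo = mid + 1;   // the break happened after mid
    } else {
      hi = mid;       // mid itself or something earlier broke it
    }
  }
  return changes[lo];
}

// Example: 10 changes (0..9); every build from change 6 onward fails.
const culprit = findBreakingChange([...Array(10).keys()], i => i < 6);
```

Each iteration halves the search range, so a medium test only needs to run O(log n) times instead of once per change.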

*Sidenote: In their talk they use the term CL (changelist), which comes from their Perforce-based SCM system. In Git, this is somewhat similar to the idea of a commit (basically a set of changes), though not identical: in an IDE like IntelliJ you can actually create a changelist without making a commit.

Feedback Driven Development

I give credit for this idea to my co-worker Ash Etemadieh, who first told me about this concept; it's something I've been thinking about lately.

As the name suggests, Feedback Driven Development (FDD for short) is a play on the popular concept of Test Driven Development (TDD). Instead of always relying on writing tests first, FDD is a broader approach that says you, as the developer, should use whichever method gives you feedback the fastest. This idea that we should go beyond strictly adhering to TDD has been brewing for some time, particularly with David Heinemeier Hansson (DHH), the creator of Rails.*

* For those interested, DHH had a very thorough conversation with Kent Beck (the originator of TDD) and Martin Fowler around this idea of "Is TDD dead?" While Kent and Martin see the benefits of TDD as easily outweighing its costs, the three of them agreed far more on the general ideas of automated testing than the title would suggest.

Vojta Jina, the creator of Karma, the popular Javascript test runner, has also said that he uses TDD when creating something like a command-line app, because unit tests provide the fastest form of feedback; however, if he's making something highly visual like a user interface, he might just refresh the browser and look, as that is the fastest form of feedback.

In short, while it may seem really simple, sometimes just loading the web application after incremental changes is the best way of getting feedback.

Below, I've listed a few practical ways that you can practice FDD.

  • Unit testing - Because there are so many articles on TDD today, I won't go into too much detail, but to briefly recap the steps of TDD: 1) write a failing test, 2) write just enough application code to pass the failing test, 3) refactor the code to keep the same functionality but improve long-term maintainability, and repeat. Fowler & co. have called this the red-green-refactor cycle.
  • Running the web application - This seems like the most naive approach but is sometimes the best, as you are actually examining the application as the user. The short-term downside is that manually walking through a scenario takes time on every iteration. For example, if you have to log in and then click several buttons before you can get to the actual screen / interface you are developing, this might be an inefficient method. To remedy this, I've come up with a few sub-ideas below. The long-term downside of this approach is that it's easy to get lazy and neglect unit testing. The problem is that in order to create a maintainable large-scale code base, you have to have a comprehensive automated test suite (ideally at the unit, functional, and e2e levels).
    • Respect the URL - If every distinctive screen in your application has a unique URL, it's much easier to just refresh the page than to hop through multiple boilerplate screens to get to the one you're interested in. This depends on how you do your routing.
    • Automatically refresh the browser - Using a tool like BrowserSync automagically refreshes the browser page for you whenever you change the source code.
  • Static analysis - In the last few years, Javascript has seen static analysis tools become increasingly popular. The involvement and assistance these tools provide runs the gamut:
    • Language / syntax-based tools - Microsoft's Typescript and Facebook's Flow open source projects are the most popular and extensive static analysis tools available for Javascript right now. Typescript itself is a language that combines ideas from C# and implements them as a superset of Javascript. Flow is pitched as a tool, but it has its own syntax (which seems similar to Typescript's), and the two are essentially competing in the same space. The real benefit of these tools is that they provide feedback before runtime, which should save you time. In practice, I've seen and read that they do emit quite a few errors, so there is some investment required from the developer.
    • IDE - Using an IDE like WebStorm, or a text editor with intellisense like Microsoft's new Visual Studio Code, will give you hints about code that doesn't seem correct. Whether it's a missing comma or a misspelled variable name, I find this to be some of the fastest feedback.
    • Linting - This idea was popularized by Douglas Crockford with his tool JSLint, which checked for commonly made mistakes in Javascript. Especially in the early days, when Javascript wasn't taken seriously as a language by many people, this tool was very valuable. As the Javascript community has evolved, JSHint began to get traction as a community-based linting tool that is more flexible than JSLint. Taking it even further is ESLint, which is probably the most powerful linting tool for Javascript right now and is designed to be easily "pluggable" so you can add your own linting rules. People have created some powerful linting rules, such as best practices for the Angular framework.
  • Automated testing cloud - Sometimes you really want to get feedback on a page for multiple devices / languages / etc. This is especially relevant for those creating user interfaces that have to work for desktop web and mobile web. Using an automated testing service provider such as Sauce Labs may help you improve your workflow rather than manually opening up the page in Chrome, Firefox, IE, and Safari.
  • Visual diffing - Along the lines of automated testing, visual diffing is a particular technique where you take screenshots of the key interfaces of an application and then compare them with a baseline (basically a set of screenshots that you have manually approved as correct). Huxley is an interesting visual diffing tool; however, it's no longer maintained by Facebook.