What I learnt from building ugly To Do lists in JavaScript and jQuery

Not pretty

Here’s what I learnt when building two To Do applications. One was built using pure JavaScript and the other used jQuery with a smidgen of JavaScript.

Each To Do list had to satisfy several basic functions: the user had to be able to add tasks, remove tasks, save tasks and retrieve them. These To Do lists are far from production quality, but what I have learnt by building them will help immeasurably in my next project, and the one after that, and so on.


Both examples were built using Visual Studio Code, jQuery 3.3.1 and Chrome 67. Any text editor can be used instead of Visual Studio Code, but I have found it a joy to use and it is now a firm favourite of mine.


Practical experience of using Web Storage to save and retrieve tasks.
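As a minimal sketch (the storage key name and the shape of the task array are my assumptions, not the project's actual code), saving and retrieving with Web Storage looks like this:

```javascript
// Web Storage only holds strings, so the task array is serialised as JSON.
function saveTasks(tasks) {
  localStorage.setItem('tasks', JSON.stringify(tasks));
}

// Retrieve the saved tasks, falling back to an empty array when
// nothing has been stored yet.
function loadTasks() {
  const saved = localStorage.getItem('tasks');
  return saved ? JSON.parse(saved) : [];
}
```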


Understanding the Immediately Invoked Function Expression, or IIFE. Whilst there are other reasons to use an IIFE, such as preventing name collisions, in this example I have used one to call a function that retrieves the saved tasks once the Document Object Model (DOM) has been loaded.
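The shape of the pattern is sketched below; loadSavedTasks is a placeholder name standing in for whatever retrieval function the page actually uses:

```javascript
let status = 'waiting';

// Placeholder for the function that retrieves the saved tasks.
function loadSavedTasks() {
  status = 'tasks loaded';
}

// An IIFE: the anonymous function is defined and invoked immediately.
// Its local variables stay private, which avoids name collisions, and
// placed in a script at the end of the body it runs once the DOM has
// been parsed.
(function () {
  loadSavedTasks();
})();
```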


Although there are only 200 lines of JavaScript, I have become familiar with, and thankful for, Chrome DevTools. Pressing F12 has become second nature when I need to find out what is actually going on with my code. From seeing errors shown in the console to adding breakpoints via Sources, I have found Chrome DevTools intuitive to use.


Creating and removing input items dynamically. The To Do list needed functionality whereby a user could add and remove tasks which translated to being able to create input elements and attach them to the parent. When removing an item, the code needed to find out which task had been selected for removal before removing it.
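A sketch of the idea in plain JavaScript (the element names are illustrative, not the project's actual markup):

```javascript
// Create a new task element and attach it to the parent list.
function addTask(list, description) {
  const item = document.createElement('li');
  item.textContent = description;
  list.appendChild(item);
  return item;
}

// Remove a selected task by asking it for its parent and detaching it.
function removeTask(item) {
  item.parentNode.removeChild(item);
}
```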


When refreshing my knowledge of JavaScript I had trouble getting to grips with which selector to use when selecting elements of the DOM. Experimenting with different JavaScript selectors, such as querySelector and getElementById for single element node selection and getElementsByName for selecting more than one node, gave me a greater understanding of working with the DOM, and of the difference between the code required to handle one node and the code required when a node list is returned.
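A sketch of that difference (the id and name values below are my own examples): single-node selectors return one element you can use directly, whereas getElementsByName returns a node list that has to be iterated:

```javascript
// Single-node selection: one element (or null) comes back, so it can
// be used directly.
function setHeading(text) {
  const heading = document.getElementById('pageTitle');
  heading.textContent = text;
}

// Multi-node selection: a node list comes back, so each node must be
// handled in turn, even if only one element matched.
function tickAll() {
  const boxes = document.getElementsByName('completed');
  for (const box of boxes) {
    box.checked = true;
  }
}
```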



Use of the jQuery ready method. All the code used by index.js is contained within jQuery's ready method, the shorthand version of which is shown below:

$(function() {

   // all code goes between the opening and closing brackets

});

Doing this ensures that the DOM tree has been constructed by the browser before the code runs.


Similar to the JavaScript version, I learnt how to create and remove items dynamically, attaching or detaching them from the parent as necessary.
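A hedged sketch of the same flow in jQuery (the '#taskList' id is my assumption, not necessarily the project's markup):

```javascript
// Build a new list item, set its text and attach it to the parent
// in a single chain.
function addTask(description) {
  return $('<li>').text(description).appendTo('#taskList');
}

// Detach a previously created item from its parent.
function removeTask(item) {
  item.remove();
}
```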


When working with items, using the id attribute requires use of the # symbol followed by the unique id name.
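For example, assuming the task counter lives in an element such as <span id="taskCount">0 tasks</span> (the id name is my guess, not necessarily the one used in the project), the selection and update would look like:

```javascript
// '#' plus the unique id name selects the single matching element.
function updateTaskCount(count) {
  $('#taskCount').text(count === 1 ? '1 task' : count + ' tasks');
}
```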
Refactoring. As I was building this second To Do list I could see ways to improve things and do more with less code. In addition, whilst working on this version I discovered a bug that would also have affected the JavaScript version, so I was able to go back and fix it in both.


I really enjoyed working with jQuery and love the simplicity of selecting items and then chaining actions to do interesting work with the selection, but there were times when I thought pure JavaScript was a better fit. Within index.js you can see that the function getLocalStorageAsArray() does not use jQuery.
Use of .each(). jQuery allows you to update all of the selected elements without the need for a loop. However, there were times when I needed to perform actions on each element found by a selection, which meant using the .each() method, which recreates a loop in the scenarios where one is required. This required a slight shift in thinking, because coming from another language it is too easy to assume a loop is required when a jQuery selection is all that is really needed.


A pleasant side effect of working with .each() was that I needed to find out which iteration my .each() was currently processing, and during my hunt for an answer I discovered that the official jQuery documentation is really good: it succinctly pointed me to using the index argument.
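A sketch of .each() with the index argument (the '.task' class and the numbering behaviour are my own illustration, not the project's code):

```javascript
// .each() iterates over every element in the selection; the first
// argument to the callback is the zero-based index of the element
// currently being processed.
function numberTasks() {
  $('.task').each(function (index, element) {
    $(element).text((index + 1) + '. ' + $(element).text());
  });
}
```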


Using jQuery reduced the code by about 50 lines compared with the pure JavaScript version (I have no doubt a more experienced developer could reduce both versions down even further). I am not going to read too much into that, because it is like comparing apples with oranges, but for me it was an interesting observation.



I have surprised myself with how much I have learnt building these and how much fun I had doing so.


JavaScript and jQuery by Jon Duckett. This is the best book I have read on JavaScript and jQuery and I have no hesitation in recommending it to you.

Hello WordPress.com

After several fruitless attempts to contact my previous web host in order to add HTTPS to this site, I gave up and moved this blog to WordPress.com. Additionally, I was getting increasingly bored with the management of a self-hosted WordPress site, with challenges such as deciding which versions of WordPress the hosting company would accept and which of the several plugins I used should be updated. A lot of noise confronted me each time I logged on, when all I wanted was to put up something I had learnt. So I have outsourced all of those distractions to WordPress.com.

The migration from self-hosting wasn't as painful as I thought and took about a day, albeit this was going from one version of WordPress to another. The export functionality creates an XML file that contains posts, pages, comments, categories, and tags only. Importantly, the export does not include images; this needs to be done manually. I used downML, which downloaded all the images in a WordPress media library into one zip file.

Upon uploading the images to WordPress.com, they had a different URL, which meant that when I imported my posts from the old site, none of the images referenced by the posts could be found. I was faced with two options: amending the export script to change the URL for the images in each post, or manually adding the images. I chose the latter, simply because by the time I had developed the regex to find and replace the URLs, then tested, fixed and retested it and so on, I could have made far quicker progress using the manual approach. If my back catalogue of posts had been greater, I think I would have taken the programmatic route. Although not the most fun job in the world, it is now done and I am pleased with the results. If you notice any bugs whereby a post is missing an image, please get in touch.


Some recent podcast episodes

I have listened to these episodes multiple times now and have been able to extract more value and extra insights from them on every playback.

I hope you enjoy them as much as I have.

Akimbo: No such thing (as writer's block)

Seth Godin’s podcast is superb, and this episode on writer’s block is my favourite. In it he argues convincingly that there is no such thing.

Developer on Fire: Episode 196 Rob Conery

I am not sure how I found Dave Rael’s podcast, Developer on Fire, but I am glad I did. In each episode Dave interviews famous and not-so-famous programmers. I have enjoyed rummaging through the archive listening to interviews with Gerald Weinberg, Ward Cunningham, Dave Thomas; the list goes on.

My favourite episode features Rob Conery. In this episode they talk about Rob’s book, The Imposter’s Handbook (a superb book!), but it is the discussions on the perception of danger and getting over yourself that really make this episode special for me.

The Hello World Podcast: Episode 12 Scott Hanselman

As well as hosting one of my favourite podcasts, Hanselminutes, Scott is a fantastic guest. In this episode, Scott talks about retaining some perspective even when your project is being chased by ninjas, and the common trait his mentors share.


The incremental commit Anti-pattern

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal for this post is to have something tangible that I can point to the next time an Oracle Developer thinks they want or need to “do” incremental commits.

In this post an incremental commit is defined as code which commits a transaction whilst the cursor is kept open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as the example below:


DECLARE
  CURSOR customers_cur IS
    SELECT columns
      FROM some_tables;
BEGIN
  FOR i IN customers_cur LOOP
    -- do something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations of:

IF rows_processed > some_arbitrary_number


IF mod(customers_cur%rowcount, v_commit_limit) = 0 

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 Snapshot too old exception. I will come back to this in the final part of this post.

Why are Incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop. You should commit when your transaction is complete and only then. If your rollback segments are too small to support your transactions then you need to work with your DBA and get them resized.

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation for ORA-01555 is this AskTom post, which originally started some 18 years ago. Much of what follows is distilled from that thread.

An ORA-01555 occurs when the database is unable to obtain a read consistent image. In order to obtain this information the database uses the rollback segment, but if that information has been overwritten then the database cannot use it and the ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and it is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information because Session 1’s changes have generated rollback information that has overwritten it and returns the ORA-1555 error.

I have marked the key point: step 5. By issuing a commit you are saying, “I have finished with this data; other transactions, feel free to reuse it.” Except you haven’t finished with it, and when you really need it, it will have been overwritten.

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.


This post would not have been possible without the help from the following sources:

AskTom question Snapshot too old

Stackoverflow Question Commit After opening cursor

NDC 2018


I try to attend a developers’ conference once a year and this year I attended NDC London 2018. I was surprised that only two developers out of the many I had asked before going knew of the NDC brand of conferences. I discovered NDC thanks to Carl and Richard of .NET Rocks! Thanks gents, I owe you.

Just in case you are not aware of what NDC is, here is a brief description courtesy of the NDC London site.

About NDC

Since its start-up in Oslo 2008, the Norwegian Developers Conference (NDC) quickly became one of Europe’s largest conferences for .NET & Agile development. Today NDC Conferences are 5-day events with 2 days of pre-conference workshops and 3 days of conference sessions.

NDC London

In December 2013 NDC did the first NDC London conference. The conference was a huge success and we are happy to announce that the 5th NDC London is set to happen the week of 15-19 January 2018.

I didn’t attend the pre-conference workshops, so my NDC adventure started on Wednesday. First impressions of the conference and its venue, The Queen Elizabeth II Centre, Westminster, were superb: upon arriving there were no queues to register, and another plus was that the cloakroom was free, which although a small touch was one I really appreciated.

Throughout the conference, high quality hot drinks and food were continuously served, starting with cakes and pastries and moving on to a variety of hot food. I wouldn’t normally mention food at a conference, but it was of a standard I had not encountered at other conferences, so I had to write a few lines about it.


My reason for attending was to hear a number of people speak whose podcasts I listen to, whose books and blogs I have read, or who have helped me by providing answers on Stack Overflow. As a newbie to this conference I did not know what to expect from the talks, but I was not disappointed, and for me the conference experience went into the stratosphere from here on in.

The talks are scheduled to last one hour and, as you will see from the agenda, they cover a wide variety of subjects. The presenters did not disappoint. There was no death by PowerPoint, no “about me” slides, no errs/errms or the dreaded “like”. The presenters were passionate about their topics and clearly enjoyed engaging with their audiences. Some had a slightly more conversational style, whilst others used self-deprecation, and one in particular used the medium of song (thank you Jon Skeet, that was unforgettable). One common trait I noticed is that many of the presenters are building and experimenting with “stuff” all the time.

As is the norm, after a talk/session the audience are invited to give feedback, and NDC has probably the best system I have so far encountered: as you leave the room after a talk, you just throw a colour in the box. Brilliant.


Here are the sessions I attended:


Keynote: What is programming anyway?

Sondheim Seurat and Software: finding art in code
Jon Skeet

You build it, you run it (why developers should also be on call)
Chris O’Dell

I’m Pwned. You’re Pwned. We’re All Pwned.
Troy Hunt

Refactoring to Immutability
Kevlin Henney

Adventures in teaching the web
Jasmine Greenaway

C# 7.1, and 7.2: The releases you didn’t know you had
Bill Wagner


Building a Raspberry Pi Kubernetes Cluster and running .NET Core
Alex Ellis & Scott Hanselman

An Opinionated Approach to ASP.NET Core
Scott Allen

Who Needs Dashboards?
Jessica White

Hack Your Career
Troy Hunt

HTTP: History & Performance
Ana Balica

Going Solo: A Blueprint for Working for Yourself
Rob Conery

.NET Rocks Live with Jon Skeet and Bill Wagner – Two Nice C# People


The Modern Cloud
Scott Guthrie

Web Apps can’t really do *that*, can they?
Steve Sanderson

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne, and Jon Skeet

Tips & Tricks with Azure
Scott Guthrie

Solving Diabetes with an Open Source Artificial Pancreas
Scott Hanselman

Why I’m Not Leaving .NET
Mark Rendle


NDC London 2018 was the best conference I have ever attended. I have returned from it motivated to do more; to experiment and try stuff that I hadn’t even thought about.

There were so many highlights for me but having my photo taken with Carl and Richard was the best. Seriously guys you rock!


Contributing to an Open Source Project

I have been interested in Git, the distributed version control software, since reading the first edition of Scott Chacon’s Git book way back in 2010. However, outside of my own projects, my real-world experience of using Git is relatively limited, and it is one of the skills I never seem to get around to improving.

To change this, I have recently contributed to an open source project hosted on GitHub. The change I made can be found here and this post is my recollection of the process to help me and hopefully others just getting started with the GitHub workflow.

For a comprehensive guide to the GitHub workflow, I recommend reading Chapter 6 of the Git book.

Find a project that you want to contribute to.

Probably the trickiest step: there are so many projects, how do you find one to contribute to? In my case I started with a project that I know and use, OraOpenSource/Logger, which is a great tool for instrumenting Oracle PL/SQL code.


From there it’s a quick scan of the open issues. I picked one related to the documentation because I wanted to focus on the GitHub workflow. The challenging technical issues and enhancements will still be there once I have got up to speed with the GitHub way of working.

Once you have found a project, it is unlikely that you will be able to push your changes to it directly, so the next step is to fork it. This gives you a copy of the project within your own namespace, which you can then make changes to.

Make the change

With the project forked, you can go ahead and create a topic branch, make the necessary changes to the files and, once you are happy with them, push them back to your copy of the project.

Pull request and… Oops!

When you are ready to contribute your changes back to the original project you need to create a pull request. Creating a pull request opens up a discussion thread with a code review focusing on your proposed change.

Don’t worry if the change is discussed or rejected; dust yourself down and go again. I had my own oops moment with my first pull request, as I had changed a URL from relative to absolute. Not a problem: I closed the initial pull request and created another, which has now been accepted and merged into the project.

Make the world a better place

Apologies for the heading; I have been enjoying Silicon Valley around the time this post was taking shape. If not the world, your change, no matter how small, will make the project you are contributing to better, and it gives you a public artefact that you can point to.


In this article I have written about my Git experience, along with my first contribution to an open source project.

Structured Basis Testing

Re-reading Code Complete 2 recently, I came across the concept of Structured Basis Testing, which despite its foreboding name is in fact an easy to understand and straightforward method that you can use to calculate the minimum number of unit tests required to exercise every path through a function or procedure.

Structured Basis Testing differs from Code Coverage or Logic Coverage testing because it gives you the minimum number of test cases needed to exercise every path, whereas with Code Coverage or Logic Coverage testing you could end up with many more test cases than you need to cover the same logic.

You calculate the minimum number of tests required by following these rules:

  1. Start with 1 for the straight path through the function or procedure.
  2. Add 1 for each keyword or decision construct such as if, while, for, do, and, or.
  3. Add 1 for each case in a case statement.
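As an illustration of the counting rules (this function is my own example, not one from the book):

```javascript
// Minimum tests = 1 (straight path) + 1 (the if) + 1 (the &&) = 3.
function canSubmit(form) {
  if (form.name && form.email) { // +1 for the if, +1 for the &&
    return true;
  }
  return false;
}
```

One test covers the path where both conditions hold, one where name is missing, and one where name is present but email is missing, giving the three tests the rules predict.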

Some Examples

Example 1


In this first example, there are 2 unit tests required to exercise every path through the function. Number 1 is the straight path through the function, i.e. a record is found by the select statement. Number 2 is the path taken when a record is not found.

Example 2


In the second example there are 5 unit tests required to test every path through this procedure. Number 1 is again the straight path through the procedure. Numbers 2 – 5 are each of the cases in the case statement.

Example 3


In this final example, there are 4 unit tests required to test every path through this procedure. The first is the straight path through the routine; the bulk collect will not raise an exception if no records are found, so there are no divergent paths at this point. The next unit test, number 2, exercises the for loop, and then numbers 3 and 4 exercise the IF statement paths.


In this article I have explained what Structured Basis Testing is, how it differs from other code coverage testing and how to calculate the minimum number of unit tests required to exercise every path in your function or procedure.


The idea for this post comes from the discussion on Structured Basis Testing in Code Complete 2, pages 505-506.