20180712_01.png

Thanks to Scott Hanselman’s tweet, I went off and had an enjoyable couple of hours building Jerrie Pelser’s Airport Explorer, and I learnt a lot more than I was expecting.

20180712_02.JPG

Firstly, the app is a single page ASP.NET Razor Pages application, something I didn’t even know existed but which will now be my first choice when developing an upcoming ToDo list requirement. I also used the Mapbox and Google Places APIs for the first time.

The Google API was particularly interesting because it has moved on since Jerrie’s book was written, which meant that one particular line of code would not compile:

if (photosResponse.PhotoBuffer != null)
{
    airportDetail.Photo = Convert.ToBase64String(photosResponse.PhotoBuffer);
    airportDetail.PhotoCredit = photoCredit;
}

The version of the Google API I was using was 3.6.1, which meant that the project would not compile with the reference to PhotoBuffer shown above. Reviewing Jerrie’s completed example on GitHub, I saw that the Google API version used was 3.2.10. So I learnt how to install a particular version of a package using the Package Manager Console:

Install-Package GoogleApi -Version 3.2.10

In addition, my debugging skills were exercised when bad data kept my page from loading. It took a little while, but I traced the problem to this line of code:

 using (var reader = new CsvReader(sr, configuration))

where I had forgotten to pass in the configuration argument, which handled the bad data!
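The missing piece was the configuration passed to CsvReader. Here is a hedged sketch of what that configuration looks like; the property and class names changed between CsvHelper releases, and the Airport model name is assumed, so treat this as illustrative rather than the book’s exact code:

```csharp
using System;
using System.IO;
using CsvHelper;
using CsvHelper.Configuration;

// Tell CsvHelper what to do when it meets a malformed row, instead of throwing.
var configuration = new Configuration
{
    BadDataFound = context => Console.WriteLine($"Bad row skipped: {context.RawRecord}")
};

using (var sr = new StreamReader("airports.dat"))
using (var reader = new CsvReader(sr, configuration))
{
    // Airport is the model class from the book; the name is assumed here.
    foreach (var airport in reader.GetRecords<Airport>())
    {
        // process each good record
    }
}
```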

I used GitKraken for the first time to manage my Git interactions and learnt how to undo a commit after remembering that I had left my API credentials in the source code!

My version of the Airport Explorer can be found here.

After several fruitless attempts to contact my previous web host about adding HTTPS to this site, I gave up and moved this blog to WordPress.com. I was also getting increasingly bored with managing a self-hosted WordPress site, with challenges such as deciding which versions of WordPress the hosting company would accept and which of the several plugins I used should be updated. A lot of noise confronted me each time I logged on, when all I wanted was to put up something I had learnt. So I have outsourced all of those distractions to WordPress.com.

The migration from self-hosting wasn’t as painful as I had feared and took about a day, albeit this was going from one version of WordPress to another. The export functionality creates an XML file that contains only posts, pages, comments, categories and tags. Importantly, the export does not include images; these need to be moved manually. I used downML, which downloads all the images in a WordPress media library into one zip file.

Upon uploading the images to WordPress.com they had different URLs, which meant that when I imported my posts from the old site, none of the images referenced by the posts could be found. I was faced with two options: amend the export file to change the image URLs in each post, or add the images manually. I chose the latter, simply because by the time I had developed the regex to find and replace the URLs, then tested, fixed and retested it, I could have made far quicker progress with the manual approach. If my back catalogue of posts had been greater, I think I would have taken the programmatic route. Although not the most fun job in the world, it is now done and I am pleased with the results. If you notice any bugs whereby a post is missing an image, please get in touch.

 

Lighthouse is a useful tool that is unfortunately tucked away within the depths of the Chrome browser DevTools. It performs an audit of a URL that you supply and provides a report based on scoring across five categories: Performance, Progressive Web App (PWA), Accessibility, Best Practices and Search Engine Optimisation (SEO).

The easiest method of taking it for a spin is to start Chrome, go to a page of interest, press F12, then press the >> button and choose Audits,

20180618_1.png

which will bring up the Lighthouse home page.

20180618_2.png

Pressing “Perform an audit” will present you with options to include or exclude any of the five categories.

20180618_3.png

You invoke Lighthouse by selecting “Run audit”. After an interval the report will be displayed; the Lighthouse report for google.co.uk is shown below.

20180618_4.png

The top bar gives information about the URL audited and the emulation options used. Immediately underneath are the five categories, each with a score between 0 and 100, where 0 is the lowest possible score. The Lighthouse v3 Scoring Guide explains how these scores are calculated.

Selecting a category will take you to the relevant section of the report.

20180618_5.png

It is also possible to invoke Lighthouse using the Node command line tool. First install it using the Node Package Manager. I have used the -g option to install it as a global module.

npm install -g lighthouse

Once installed you can then use a command such as the following to run Lighthouse. The view option automatically opens Chrome and displays the HTML report.

lighthouse https://www.google.co.uk --view

If you prefer the output in JSON, the following will create a file in the directory the command was run from:

lighthouse https://www.google.co.uk --output=json --output-path=./google.json
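The JSON report lends itself to post-processing. Here is a sketch, assuming the Lighthouse v3 report shape (a top-level categories object whose score values are fractions between 0 and 1); to process a real report, JSON.parse the google.json file created above:

```javascript
// Extract the familiar 0-100 scores from a Lighthouse v3 JSON report object.
function summariseScores(report) {
  const summary = {};
  for (const [id, category] of Object.entries(report.categories)) {
    // v3 stores scores as fractions (0..1); scale to the 0-100 display values.
    summary[id] = Math.round(category.score * 100);
  }
  return summary;
}

// A cut-down object shaped like the real report:
const sample = { categories: { performance: { score: 0.98 }, seo: { score: 0.89 } } };
console.log(summariseScores(sample)); // { performance: 98, seo: 89 }
```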

Acknowledgements

Lighthouse home page

The Building Web Applications for the next Billion Users with Ire Aderinokun episode of Hanselminutes first made me aware of Lighthouse.

 

If your JavaScript executes but not in the way you expect, one method of finding out what is going on is to add code to the script that outputs information to the console as it executes.

Vanilla Logging

The method that most JavaScript developers default to is to use console.log similar to the screen shot below.

20180608_1

Opening the Chrome DevTools (F12) displays the output for this script as:

20180608_2
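The code in the screenshots isn’t reproduced here, but a minimal sketch of this style of logging (the function name, data and messages are illustrative, not the original’s) looks something like this:

```javascript
// Plain console.log calls sprinkled through a function to trace execution.
function calculateTotal(prices) {
  console.log('calculateTotal called with', prices);
  let total = 0;
  for (const price of prices) {
    total += price;
    console.log('running total is now', total);
  }
  console.log('final total is', total);
  return total;
}

calculateTotal([5, 10, 20]);
```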

It doesn’t take many messages before I reach information overload, so I use this method sparingly. To assist in such situations, the JavaScript Console object has other methods that can be used to make different categories of logging easy to see.

Console.info(), Console.warn() and Console.error()

In order to differentiate between different classes of logging, you can use the .info, .warn and .error methods of the Console object.

Here is the function updated to use these methods; note that each of them is called in the same way as console.log:

20180608_3

And the output from the Chrome DevTools console is:

20180608_4-

This output comes from Chrome 67. Note that although the first two messages use .info(), the output is identical to .log(); since Chrome 58, info() and log() have been displayed identically in the console window (if you would like to know more, this link is a good place to start). The calls to warn and error are shown in yellow and red respectively.
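As a sketch of the same idea (again with illustrative names and messages, not the screenshot’s), the three methods are called exactly like console.log:

```javascript
// .info(), .warn() and .error() accept the same arguments as console.log().
function checkStockLevel(item, quantity) {
  console.info('checking stock for', item);
  if (quantity === 0) {
    console.error(item, 'is out of stock');           // rendered in red
  } else if (quantity < 5) {
    console.warn(item, 'is running low:', quantity);  // rendered in yellow
  }
  return quantity > 0;
}

checkStockLevel('widgets', 3);
checkStockLevel('gadgets', 0);
```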

Logging Groups

When reviewing the output from a related set of instructions, such as the output from a function, it may help to keep the related information together. Console.group can be used to help in these situations.

20180608_5-

The console output from this function is shown below, first collapsed:

20180608_6-

and then expanded:

20180608_7-
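A sketch of grouped logging along the same lines (the names and data are illustrative):

```javascript
// console.groupCollapsed() starts a group closed; console.group() starts it open.
// Everything logged before console.groupEnd() is nested inside the group.
function describeUser(user) {
  console.groupCollapsed('User details');
  console.log('name:', user.name);
  console.log('email:', user.email);
  console.groupEnd();
  return user.name;
}

describeUser({ name: 'Ada', email: 'ada@example.com' });
```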

Table

Another aid to improving readability, particularly when dealing with a lot of tabular data, is to format it into a table using console.table. In the example shown, an object is populated with data and then a call is made to console.table, passing it the columboEpisodes object.

20180608_8-

The output from the Chrome DevTools console is:

20180608_9-
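The original episode data is in the screenshot, but a sketch of the same technique with a few made-up rows looks like this:

```javascript
// console.table() renders an array of objects as one row per element,
// with a column per property.
const columboEpisodes = [
  { title: 'Murder by the Book', season: 1, year: 1971 },
  { title: 'Suitable for Framing', season: 1, year: 1971 },
  { title: 'Étude in Black', season: 2, year: 1972 }
];

console.table(columboEpisodes);
```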

Assert

The very best method of aiding readability is to have a message appear in the console only if an expected condition is broken at that point in the script. Console.assert can be used in such circumstances: if the assertion returns true, nothing is shown. This can help reduce the noise in the console window.

20180608_10

The output of the script is shown below. Note that only one message appears: the one whose condition returns false, which is the second one, “X is not greater than Y”.

20180608_11
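A sketch matching the described output (the variable names and values are assumed):

```javascript
// console.assert() only writes to the console when its condition is false.
const x = 5;
const y = 10;

console.assert(y > x, 'Y is not greater than X'); // true: nothing is shown
console.assert(x > y, 'X is not greater than Y'); // false: the message is shown
```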

Acknowledgements

JavaScript & jQuery by Jon Duckett

The MSDN pages on the Console Object

I have listened to these episodes multiple times now and have been able to extract more value and extra insights from them on every playback.

I hope you find them as enjoyable as I have.

Akimbo: No such thing (as writer’s block)

Seth Godin’s podcast is superb, and this episode on writer’s block is my favourite. In it he argues convincingly that there is no such thing.

Developer on Fire: Episode 196 Rob Conery

I am not sure how I found Dave Rael’s podcast, Developer on Fire, but I am glad I did. In each episode Dave interviews famous and not-so-famous programmers. I have enjoyed rummaging through the archive listening to interviews with Gerald Weinberg, Ward Cunningham, Dave Thomas; the list goes on.

My favourite episode features Rob Conery. In it they talk about Rob’s book, The Imposter’s Handbook (a superb book!), but it is the discussions on the perception of danger and getting over yourself that really make this episode special for me.

The Hello World Podcast: Episode 12 Scott Hanselman

As well as hosting one of my favourite podcasts, Hanselminutes, Scott is a fantastic guest. In this episode Scott talks about retaining some perspective even when your project is being chased by ninjas, and about the common trait his mentors share.

 

I have been working through the NG2 book on Angular 5 and became stuck when trying to get an example on Dependency Injection to work.

Here is part of the problematic code I was unable to run:

ngDI-Copy

When running the sample, the page did not load and part of the console message reported by Chrome is shown below:

ngDI3-Copy

This problem was occurring using the following Angular environment:

ngDI2

Starting with the user-demo-component.ts file, and after commenting out various parts of the code within it, I tracked the problem to the constructor:


constructor(private userService: UserService) {
}

Fortunately StackOverflow came to the rescue and this answer provided the fix. I made the following change to user-demo-component.ts, adding providers to the @Component decorator:

...
  providers: [UserService]
...

The @Component decorator should now look like this:

ngDI4-After
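The screenshot is not reproduced here, but the decorator should end up along these lines; the selector, template path and class name are assumptions based on the file name, not the book’s exact code:

```typescript
import { Component } from '@angular/core';
import { UserService } from '../services/user.service'; // path is illustrative

@Component({
  selector: 'app-user-demo',
  templateUrl: './user-demo-component.html',
  // Registering the service here is the fix: it tells Angular's injector
  // how to construct the UserService this component's constructor asks for.
  providers: [UserService]
})
export class UserDemoComponent {
  constructor(private userService: UserService) {
  }
}
```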

Why does the completed example work without this fix?

Good question! If you have read any of the NG2 book you will know that all the code examples come with the source ready for you to try out rather than type in. When I checked the completed example and ran it, I couldn’t see a similar change, yet the code worked as expected, so I will need to continue digging to ascertain why the supplied version works.

I will update the post once I have found out.

Update 23/04/2018

The completed example uses a different version of Angular and TypeScript. Here is the output from ng --version:

ngDI5-After-Copy

The completed example which works uses Angular 5.2.0 whereas the version of Angular I am using is 5.2.9.

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal for this post is to have something tangible that I can point to the next time an Oracle developer thinks they want or need to “do” incremental commits.

In this post an incremental commit is defined as code which commits a transaction while the cursor is kept open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as the examples below:

Example

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i in customers_cur
  LOOP
    -- doing something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations of:

IF rows_processed > some_arbitrary_number
THEN
  COMMIT;
END IF;

Or

IF mod(customers_cur%rowcount, v_commit_limit) = 0
THEN
  COMMIT;
END IF;

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 “snapshot too old” exception. I will come back to this in the final part of this post.

Why are Incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop. You should commit when your transaction is complete and only then. If your rollback segments are too small to support your transactions then you need to work with your DBA and get them resized.
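Applied to the earlier example, the fix is simply to move the commit outside the loop so that the whole run is one transaction. A sketch, reusing the same hypothetical cursor:

```sql
BEGIN
  FOR i IN customers_cur
  LOOP
    -- do something interesting with the row; no commit here
    NULL;
  END LOOP;
  COMMIT; -- commit once, when the transaction is complete
END;
```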

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation of ORA-01555 is this AskTom post, which originally started some 18 years ago. Much of what follows is distilled from that thread.

An ORA-01555 occurs when the database is unable to obtain a read-consistent image. To obtain this image the database uses the rollback segment, but if that information has been overwritten then the database cannot use it and the ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and it is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information, because Session 1’s changes have generated rollback information that has overwritten it, and the ORA-01555 error is returned.

I have marked the key point: step 5. By issuing a commit you are saying, “I have finished with this data; other transactions are free to reuse it.” Except you haven’t finished with it, and when you really need it, it will have been overwritten.

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.

Acknowledgements:

This post would not have been possible without the help from the following sources:

AskTom question Snapshot too old

Stackoverflow Question Commit After opening cursor