Lighthouse is a useful tool that is unfortunately tucked away within the depths of the Chrome browser DevTools. It performs an audit of a URL that you supply and provides a report with scores across five categories: Performance, Progressive Web App (PWA), Accessibility, Best Practices and Search Engine Optimisation (SEO).

The easiest method of taking it for a spin is to start Chrome, go to a page of interest, press F12, then press the >> button and choose Audits

20180618_1.png

which will bring up the Lighthouse home page.

20180618_2.png

Pressing “Perform an audit” will present you with options to include or exclude each of the five categories.

20180618_3.png

You invoke Lighthouse by selecting “Run audit”. After a short interval the report will be displayed. The Lighthouse report for google.co.uk is shown below.

20180618_4.png

The top bar gives information about the URL audited and the emulation options used. Immediately underneath are the five categories, each with a score between 0 and 100, where 0 is the lowest possible score. The Lighthouse v3 Scoring Guide explains how these scores are calculated.

Selecting a category will take you to the relevant section of the report.

20180618_5.png

It is also possible to invoke Lighthouse from its Node command line tool. First install it using the Node Package Manager; I have used the -g option to install it as a global module.

npm install -g lighthouse

Once installed you can then use a command such as the following to run Lighthouse. The --view option automatically opens Chrome and displays the HTML report.

lighthouse https://www.google.co.uk --view

If you prefer the output as JSON, the following will create a file in the directory the command was run from:

lighthouse https://www.google.co.uk --output=json --output-path=./google.json
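If you want to do something with the JSON programmatically, a small Node script along the following lines should work. This is only a sketch and assumes the Lighthouse v3 report layout, where each entry under categories has a title and a score between 0 and 1.

// show_scores.js - a sketch that assumes the Lighthouse v3 JSON report layout.
const report = require('./google.json');

// Category scores in the JSON are between 0 and 1; multiply by 100
// to match the values shown in the HTML report.
Object.values(report.categories).forEach(category => {
  console.log(`${category.title}: ${Math.round(category.score * 100)}`);
});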

Acknowledgements

Lighthouse home page

The “Building Web Applications for the next Billion Users with Ire Aderinokun” episode of Hanselminutes first made me aware of Lighthouse.

 

If your JavaScript executes, but not in the way you expect, one method of finding out what is going on is to add code to the script that outputs information to the console as the script executes.

Vanilla Logging

The method that most JavaScript developers default to is console.log, used in a similar way to the screenshot below.

20180608_1
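As a rough sketch of what that looks like (a hypothetical function and data, not the exact code in the screenshot):

function calculateTotal(prices) {
  // Trace the input and the result as the function executes.
  console.log('calculateTotal called with:', prices);
  const total = prices.reduce((sum, price) => sum + price, 0);
  console.log('calculateTotal returning:', total);
  return total;
}

calculateTotal([1.99, 4.50, 10.00]);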

Opening the Chrome DevTools (F12) displays the output for this script as:

20180608_2

It doesn’t take many messages before I reach information overload, so I use this method sparingly. To assist in such situations, the JavaScript Console object has other methods that help make different categories of logging easy to see.

Console.info(), Console.warn() and Console.error()

In order to differentiate between different classes of logging, you can use the .info, .warn and .error methods of the Console object.

Here is the function updated to use these methods; note that each of them is called in the same way as console.log.

20180608_3
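A sketch of the same idea, again with hypothetical names and conditions:

function calculateTotal(prices) {
  // .info(), .warn() and .error() are called exactly like .log().
  console.info('calculateTotal called with:', prices);
  if (prices.length === 0) {
    console.warn('calculateTotal called with an empty array');
  }
  const total = prices.reduce((sum, price) => sum + price, 0);
  if (total < 0) {
    console.error('Total should never be negative:', total);
  }
  return total;
}

calculateTotal([1.99, 4.50, 10.00]);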

And the output from the Chrome DevTools console is:

20180608_4-

This output comes from Chrome 67. Note that although the first two messages use .info(), their output is identical to .log(); unfortunately, since Chrome 58, info() and log() have been shown identically in the console window. (If you would like to know more, this link is a good place to start.) The calls to warn and error are shown in yellow and red respectively.

Logging Groups

When reviewing the output from a related set of instructions such as the output from a function, it may help to keep the related information together.  Console.group can be used to help in these situations.
20180608_5-
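Something along these lines (the function and data are made up for illustration; console.groupCollapsed can be used instead of console.group if you want the group to start collapsed):

function processOrder(order) {
  // Everything logged between group() and groupEnd() is nested together.
  console.group(`Processing order ${order.id}`);
  console.log('Customer:', order.customer);
  console.log('Items:', order.items.length);
  console.log('Total:', order.total);
  console.groupEnd();
}

processOrder({ id: 42, customer: 'Columbo', items: ['raincoat'], total: 9.99 });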

The console output from this function shows, first collapsed:

20180608_6-

and then expanded:

20180608_7-

Table

Another aid to readability, particularly when dealing with a lot of tabular data, is to format it into a table using console.table. In the example shown, an object is populated with data and then a call is made to console.table, passing it the columboEpisodes object.

20180608_8-
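As a sketch (the episode data below is invented for illustration rather than taken from the original script):

const columboEpisodes = [
  { title: 'Murder by the Book', season: 1, year: 1971 },
  { title: 'Any Old Port in a Storm', season: 3, year: 1973 },
  { title: 'Try and Catch Me', season: 7, year: 1977 }
];

// Each object becomes a row and each property a column in the console.
console.table(columboEpisodes);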

The output from the Chrome DevTools console is:

20180608_9-

Assert

The very best method of aiding readability is to have a message appear in the console only when an expected condition at that point in the script is broken. Console.assert can be used in these circumstances: if the assertion returns true, nothing is shown. This helps reduce the noise in the console window.

20180608_10
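A sketch of the idea, with illustrative values rather than the original script:

const x = 5;
const y = 10;

// The assertion holds, so nothing is written to the console.
console.assert(x < y, 'X is not less than Y');

// The assertion fails, so the message appears in the console.
console.assert(x > y, 'X is not greater than Y');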

The output of the script is shown below. Note that only one message appears: the second assertion, which returns false because X is not greater than Y.

20180608_11

Acknowledgements

JavaScript & jQuery by Jon Duckett

The MSDN pages on the Console Object

I have listened to these episodes multiple times now and have been able to extract more value and extra insights from them on every playback.

I hope you find them as enjoyable as I have.

Akimbo: No such thing (as writers block)

Seth Godin’s podcast is superb, and this episode on writer’s block is my favourite. In it he argues convincingly that there is no such thing.

Developer on Fire: Episode 196 Rob Conery

I am not sure how I found Dave Rael’s podcast, Developer on Fire, but I am glad I did. In each episode Dave interviews famous and not-so-famous programmers. I have enjoyed rummaging through the archive listening to interviews with Gerald Weinberg, Ward Cunningham, Dave Thomas; the list goes on.

My favourite episode features Rob Conery. In it they talk about Rob’s book, The Imposter’s Handbook (a superb book!), but it is the discussions on the perception of danger and getting over yourself that really make this episode special for me.

The Hello World Podcast: Episode 12 Scott Hanselman

As well as hosting one of my favourite podcasts, Hanselminutes, Scott is a fantastic guest. In this episode, Scott talks about retaining some perspective even when your project is being chased by ninjas, and the common trait his mentors share.

 

I have been working through the NG2 book on Angular 5 and became stuck when trying to get an example on Dependency Injection to work.

Here is part of the problematic code I was unable to run:

ngDI-Copy

When running the sample, the page did not load; part of the console message reported by Chrome is shown below:

ngDI3-Copy

This problem was occurring using the following Angular environment:

ngDI2

Starting with the user-demo-component.ts file, and after commenting out various parts of the code within it, I tracked the problem down to the constructor:


constructor(private userService: UserService) {
}

Fortunately, Stack Overflow came to the rescue and this answer provided the fix. I made the following change to user-demo-component.ts, adding providers to the @Component decorator:

...
  providers: [UserService]
...

The @Component decorator should now look like this:

ngDI4-After
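As a sketch (the selector, template path and service import path below are my assumptions rather than the book’s exact code), the component now reads along these lines:

import { Component } from '@angular/core';
// The path to the service is assumed; use whatever your project structure dictates.
import { UserService } from '../services/user.service';

@Component({
  selector: 'app-user-demo',
  templateUrl: './user-demo.component.html',
  providers: [UserService] // the line added to register the provider
})
export class UserDemoComponent {
  constructor(private userService: UserService) {
  }
}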

Why does the completed example work without this fix?

Good question! If you have read any of the NG2 book you will know that all the code examples come with the source ready for you to try out, rather than having to type them in. When I checked the completed example and ran it, I couldn’t see a similar change, yet the code worked as expected, so I will need to continue digging to ascertain why the supplied version works.

I will update the post once I have found out.

Update 23/04/2018

The completed example uses a different version of Angular and TypeScript. Here is the output from ng --version:

ngDI5-After-Copy

The completed example which works uses Angular 5.2.0 whereas the version of Angular I am using is 5.2.9.

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal is to have something tangible that I can point to the next time an Oracle developer thinks they want or need to “do” incremental commits.

In this post an incremental commit is defined as code which commits a transaction while the cursor is kept open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as in the examples below:

Example

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i in customers_cur
  LOOP
    -- doing something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations of:

IF rows_processed > some_arbitrary_number
THEN 
  COMMIT;

Or

IF mod(customers_cur%rowcount, v_commit_limit) = 0 
THEN 
  COMMIT;

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 Snapshot too old exception. I will come back to this in the final part of this post.

Why are Incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop. You should commit when your transaction is complete and only then. If your rollback segments are too small to support your transactions then you need to work with your DBA and get them resized.
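Using the same skeleton as the example above, the fix is simply to move the commit outside the loop so the transaction is committed once, when it is complete:

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i in customers_cur
  LOOP
    -- doing something interesting with the row
  END LOOP;
  COMMIT; -- the transaction is complete, so commit once, here
END;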

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation of ORA-01555 is this AskTom post, a thread which originally started some 18 years ago. Much of what follows is distilled from it.

An ORA-01555 occurs when the database is unable to obtain a read-consistent image. To build that image the database uses the rollback segment, but if the required information has been overwritten then the database cannot use it and ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and it is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information because Session 1’s changes have generated rollback information that has overwritten it and returns the ORA-1555 error.

The key point is step 5. By issuing a commit you are saying: I have finished with this data, other transactions are free to reuse it. Except you haven’t finished with it, and when you really need it, it will have been overwritten.

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.

Acknowledgements:

This post would not have been possible without the help from the following sources:

AskTom question Snapshot too old

Stackoverflow Question Commit After opening cursor

This post follows on from part 1.  With the AWS S3 objects in place it is now time to create a simple C# console application that will upload a text file stored locally to the AWS S3 bucket.

The first step is to create a test file that you want to upload. In my example, I have created a text file in the Downloads folder called TheFile.txt which contains some text. After creating the text file, note the name of the file and its location.

Start Visual Studio and create a new console application

AWS-dotnet1

Use NuGet to add the AWSSDK.S3 package. At the time of writing this was at version 3.3.16.2.

aws21-Copy

Next, configure App.config with your AWS credentials and region. You will find the values for the access key and secret key in the accessKeys.csv which you downloaded in part one of the tutorial.
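A minimal configuration along the following lines should work. Note that the appSettings key names shown here (AWSAccessKey, AWSSecretKey and AWSRegion) are the SDK’s legacy configuration keys and are my assumption rather than the exact file used originally; the values are placeholders to be replaced with your own:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Assumed legacy AWS SDK for .NET appSettings keys; replace the placeholder values. -->
    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY" />
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY" />
    <add key="AWSRegion" value="eu-west-2" />
  </appSettings>
</configuration>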

Create a new class called S3Uploader and paste in the following code, ensuring you change the variables bucketName, keyName and filePath as appropriate. As you can see from the comments, this code is based on this answer from Stack Overflow.

For the sake of brevity the code deliberately does not have any exception handling nor unit tests as I wanted this example to focus purely on the AWS API without any other distractions.


using Amazon.S3;
using Amazon.S3.Model;

namespace S3FileUploaderGeekOut
{

  /// <summary>
  /// Based upon https://stackoverflow.com/a/41382560/55640
  /// </summary>
  public class S3Uploader
  {
    private string bucketName = "myimportantfiles";
    private string keyName = "TheFile.txt";
    private string filePath = @"C:\Users\Ian\Downloads\TheFile.txt";

    public void UploadFile()
    {
      var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2);

      PutObjectRequest putRequest = new PutObjectRequest
      {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
      };

      PutObjectResponse response = client.PutObject(putRequest);
    }
  }
}

In the Program.cs file, add the following:


namespace S3FileUploaderGeekOut
{
  class Program
  {
    static void Main(string[] args)
    {
      S3Uploader s3 = new S3Uploader();

      s3.UploadFile();
    }
  }
}

Run the program and, once it completes, navigate to your S3 bucket via the AWS console, where you will be able to see that your file has been successfully uploaded.

Summary

In this and the previous post I have demonstrated the steps required to upload a text file from a simple C# console application to an AWS S3 bucket.

In this, the first of a two-part post, I will show you how to upload a file to the Amazon Web Services (AWS) Simple Storage Service (S3) using a C# console application.

The goal of this post is to get a very simple example up and running with the minimum of friction. It is not a deep dive into AWS S3, but a starting point which you can take in a direction of your choosing.

This post will focus on how to set up and secure your AWS S3 bucket, whilst the next will concentrate on the C# console app that will upload the file.

Dependencies

In order to build the demo the following items were used:

An AWS account (I used the 12-month free tier)

Visual Studio 2017 Community Edition 

AWS Toolkit for Visual Studio 2017

Creating a new AWS S3 bucket

Log on to your AWS Management Console and select S3 (which can be found by using the search bar or by looking under the Storage subheading).

aws1

You should now be on the Amazon S3 page as shown below.

aws2

This page gives you the headline features of your existing buckets. In the screenshot you can see an existing bucket along with various attributes.

Click the blue Create bucket button, enter a name for your bucket and the region where you wish to store your files, and then click Next.

aws3

Click Next. This screen allows you to set various bucket properties. For this demo I will not be setting any, so click Next to move on to step 3.

aws4

Leave the default permissions as they are and click Next to move on to the final page.

aws5-2

After reviewing the summary, click Create bucket.

aws6

IAM User, Group and Policy

In order to access the S3 bucket from the .NET application, valid AWS credentials are required. Whilst you could use the AWS account holder’s credentials, Amazon recommends creating an IAM user and using that user’s credentials when invoking the AWS API.

In this section of the post I will show you how to create a new IAM user and give it just the privileges required to interact with our new S3 bucket. The information shown below has been distilled from the AWS documentation.

There are a large number of steps that follow and it is easy to get lost. My advice is to read through once before diving in. If you get stuck (or I have missed something) let me know in the comments.

Return to the AWS Home screen

aws1

Search for IAM and, after selecting Users in the left-hand menu, click the blue Add user button, which will bring up the Set user details page.

aws7

Give the user a name and set the access type to Programmatic access only; there is no need for this user to be given access to the AWS console. Click Next: Permissions.

Rather than giving permissions directly to the IAM user, Amazon recommends that the user be placed in a group and that permissions be managed through policies attached to that group. So let’s do that now.

From the Set permissions page click on Create Group.

aws8

Give your Group a meaningful name.

aws16

The next step is to attach one or more policies to the group. Policies in this context define the permissions for the group. The Create group page lists the available policies, but unfortunately there isn’t an existing policy that ensures the IAM user has access only to the new S3 bucket, so click on the Create policy button.

This opens the Create policy page in a new browser tab.

aws17

Click on the JSON tab and copy the following, changing the bucket name as appropriate. (The source of this JSON can be found here.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles/*"
    }
  ]
}

At this point the JSON editor should look like this

aws22

Once done click on the Review policy button. Give your policy a meaningful name and description and then click Create policy.

aws11

You will then receive confirmation that the policy has been created.

Now click the browser tab which displays the Create group page.

aws16

To find your new policy, change the filter (located left of the search bar) to “Customer managed” and press the refresh button (located next to the Create policy button). Once you have found the newly created policy, select it and press the Create group button.

aws18

You will now be returned to the Set Permissions Page; ensure the new group is selected and click Next: Review.

The final page is a review after which you can then click Create user.

aws19

Once the user has been created, you will see a confirmation along with a download .csv button. Click the button to download the credentials as these will be needed in our C# application discussed in the next post.

aws20

Review

At this point it is worth getting a cup or glass of your favourite beverage and recapping what has been created:

  1. A new AWS S3 bucket.
  2. A new IAM user. This user has been placed in a group. The group has a policy attached that allows it to perform various operations only on the new bucket that has been created.
  3. A csv file containing the required access and secret keys has been downloaded.

On to part 2

With the S3 bucket and IAM user created and configured with the necessary privileges, it is time to move on to part two, which creates the .NET console application to upload a file into this bucket.