
I have been working through the NG2 book on Angular 5 and became stuck when trying to get an example on Dependency Injection to work.

Here is part of the problematic code I was unable to run:

[Screenshot: the problematic component code]

When running the sample, the page did not load and part of the console message reported by Chrome is shown below:

[Screenshot: the Chrome console error]

This problem occurred with the following Angular environment:

[Screenshot: output of ng --version]

Starting with the user-demo-component.ts file, and after commenting out various parts of the code within it, I tracked the problem to the constructor:


constructor(private userService: UserService) {
}

Fortunately, Stack Overflow came to the rescue and this answer provided the fix. I made the following change to user-demo-component.ts, adding providers to the @Component decorator:

...
  providers: [UserService]
...

The @Component decorator should now look something like this (the selector, template and import paths will match your own component):

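import { Component } from '@angular/core';
// The import path below is a placeholder; use the path of your own service file
import { UserService } from './user.service';

@Component({
  selector: 'app-user-demo',                   // placeholder selector
  templateUrl: './user-demo.component.html',   // placeholder template
  providers: [UserService]                     // the fix: register the service here
})
export class UserDemoComponent {
  constructor(private userService: UserService) {
  }
}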

Why does the completed example work without this fix?

Good question! If you have read any of the NG2 book you will know that all the code examples come with source ready for you to try out rather than typing them in. When I checked the completed example and ran it, I couldn’t see a similar change, yet the code worked as expected, so I will need to continue digging to ascertain why the supplied version works.

I will update the post once I have found out.

Update 23/04/2018

The completed example uses a different version of Angular and TypeScript. Here is the output from ng --version:

[Screenshot: output of ng --version for the completed example]

The completed example, which works, uses Angular 5.2.0, whereas the version of Angular I am using is 5.2.9.

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal for this post is to have something tangible that I can point to the next time an Oracle Developer thinks they want or need to “do” incremental commits.

In this post, an incremental commit is defined as code which commits a transaction while the cursor is kept open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as the examples below:

Example

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i IN customers_cur
  LOOP
    -- doing something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations of:

IF rows_processed > some_arbitrary_number
THEN 
  COMMIT;

Or

IF mod(customers_cur%rowcount, v_commit_limit) = 0 
THEN 
  COMMIT;

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 Snapshot too old exception. I will come back to this in the final part of this post.

Why are Incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop; you should commit when your transaction is complete and only then. If your rollback segments are too small to support your transactions then you need to work with your DBA to get them resized.
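To make the contrast concrete, here is the same schematic example with the commit moved to the point where the transaction is complete (the cursor, columns and table are placeholders, as in the example above):

DECLARE
  CURSOR customers_cur
  IS
    SELECT columns
      FROM some_tables;
BEGIN
  FOR i IN customers_cur
  LOOP
    -- doing something interesting with the row
    NULL;
  END LOOP;
  COMMIT; -- one commit, once the transaction is complete
END;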

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation for ORA-01555 is this AskTom post, which originally started some 18 years ago. Much of what follows is distilled from that thread.

An ORA-01555 occurs when the database is unable to obtain a read-consistent image. To obtain this image the database uses the rollback segment, but if that information has been overwritten then the database cannot use it and ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and it is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information because Session 1’s changes have generated rollback information that has overwritten it and returns the ORA-1555 error.

The key point is step 5. By issuing a commit you are saying: I have finished with this data; other transactions, feel free to reuse it. Except you haven’t finished with it, and when you really need it, it will have been overwritten.

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.

Acknowledgements:

This post would not have been possible without the help from the following sources:

AskTom question Snapshot too old

Stack Overflow question Commit after opening cursor

This post follows on from part 1. With the AWS S3 objects in place, it is now time to create a simple C# console application that will upload a locally stored text file to the AWS S3 bucket.

The first step is to create a test file that you want to upload. In my example, I have created a text file in the Downloads folder called TheFile.txt which contains some text. After creating the text file, note the name of the file and its location.

Start Visual Studio and create a new console application

[Screenshot: Visual Studio New Project dialog]

Use NuGet to add the AWSSDK.S3 package. At the time of writing this was at version 3.3.16.2.

[Screenshot: NuGet Package Manager showing AWSSDK.S3]
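If you prefer the Package Manager Console to the NuGet UI, the equivalent command is:

Install-Package AWSSDK.S3 -Version 3.3.16.2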

Add the following to App.config, substituting your own values; the AWSAccessKey and AWSSecretKey entries are the AWS SDK for .NET’s standard appSettings keys:

<configuration>
  <appSettings>
    <!-- Key names assumed from the AWS SDK for .NET; paste in the values from accessKeys.csv -->
    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY" />
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY" />
  </appSettings>
</configuration>

You will find the values for the access key and secret key in the accessKeys.csv file which you downloaded in part one of the tutorial.

Create a new class called S3Uploader and paste in the following code, ensuring you change the variables bucketName, keyName and filePath as appropriate. As you can see from the comments, this code is based on this answer from Stack Overflow.

For the sake of brevity the code deliberately does not have any exception handling or unit tests, as I wanted this example to focus purely on the AWS API without any other distractions.


using Amazon.S3;
using Amazon.S3.Model;

namespace S3FileUploaderGeekOut
{
  /// <summary>
  /// Based upon https://stackoverflow.com/a/41382560/55640
  /// </summary>
  public class S3Uploader
  {
    private string bucketName = "myimportantfiles";
    private string keyName = "TheFile.txt";
    private string filePath = @"C:\Users\Ian\Downloads\TheFile.txt";

    public void UploadFile()
    {
      // Credentials are read from App.config; the region is set explicitly
      var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2);

      PutObjectRequest putRequest = new PutObjectRequest
      {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
      };

      // Synchronous upload; the response includes the HTTP status code and ETag
      PutObjectResponse response = client.PutObject(putRequest);
    }
  }
}
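The example above deliberately lets any exception bubble up. If you want a minimal safety net, one option is a wrapper method added to S3Uploader along these lines (a sketch only; the method name and messages are mine):

    // Sketch: wraps UploadFile with minimal error handling
    public void TryUploadFile()
    {
      try
      {
        UploadFile();
        System.Console.WriteLine("Upload succeeded.");
      }
      catch (AmazonS3Exception e)
      {
        // S3-specific failures: invalid credentials, missing bucket, access denied
        System.Console.WriteLine($"S3 error ({e.StatusCode}): {e.Message}");
      }
    }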

In Program.cs, add the following:


namespace S3FileUploaderGeekOut
{
  class Program
  {
    static void Main(string[] args)
    {
      S3Uploader s3 = new S3Uploader();

      s3.UploadFile();
    }
  }
}

Run the program and once it completes, navigate to your S3 bucket via the AWS console; you will be able to see that your file has been successfully uploaded.

Summary

In this and the previous post I have demonstrated the steps required to upload a text file from a simple C# console application to an AWS S3 bucket.

In this, the first of a two-part post, I will show you how to upload a file to the Amazon Web Services (AWS) Simple Storage Service (S3) using a C# console application.

The goal of this post is to get a very simple example up and running with the minimum of friction. It is not a deep dive into AWS S3 but a starting point which you can take in a direction of your choosing.

This post will focus on how to set up and secure your AWS S3 bucket, whilst the next will concentrate on the C# console app that will upload the file.

Dependencies

In order to build the demo the following items were used:

An AWS account (I used the 12 months free tier)

Visual Studio 2017 Community Edition 

AWS Toolkit for Visual Studio 2017

Creating a new AWS S3 bucket

Log on to your AWS Management Console and select S3 (which can be found by using the search bar or looking under the Storage subheading)

[Screenshot: AWS Management Console home page]

You should now be on the Amazon S3 page as shown below.

[Screenshot: Amazon S3 bucket list page]

This page gives you the headline features of your existing buckets. In the screenshot you can see an existing bucket along with various attributes.

Click the blue Create bucket button, enter a name for your bucket and the region where you wish to store your files, and then click Next.

[Screenshot: Create bucket - name and region]

Click Next. This screen allows you to set various bucket properties. For this demo, I will not be setting any, so click Next to move on to step 3.

[Screenshot: Create bucket - set properties]

Leave the default permissions as they are and click Next to move on to the final page.

[Screenshot: Create bucket - set permissions]

After reviewing the summary, click Create bucket.

[Screenshot: Create bucket - review]

IAM User, Group and Policy

In order to access the S3 bucket from the .NET application, valid AWS credentials are required. Whilst you could use the AWS account holder’s credentials, Amazon recommends creating an IAM user and using the IAM user’s credentials when invoking the AWS API.

In this section of the post I will show you how to create a new IAM user and give it just enough privileges to interact with our new S3 bucket. The information shown below has been distilled from the AWS documentation.

There are a large number of steps that follow and it is easy to get lost. My advice is to read through once before diving in. If you get stuck (or I have missed something) let me know in the comments.

Return to the AWS Home screen

[Screenshot: AWS Management Console home page]

Search for IAM and, after selecting Users in the left-hand menu, click the blue Add user button, which will bring up the Set user details page.

[Screenshot: Set user details page]

Give the user a name and set the access type to Programmatic access only; there is no need for this user to be given access to the AWS console. Click Next: Permissions.

Rather than give permissions directly to the IAM user, Amazon recommends that the user be placed in a group and that permissions be managed through policies attached to those groups. So let’s do that now.

From the Set permissions page click on Create Group.

[Screenshot: Set permissions page]

Give your Group a meaningful name.

[Screenshot: Create group page]

The next step is to attach one or more policies to the group. Policies in this context define the permissions for the group. The Create group page lists the available policies, but unfortunately there isn’t an existing policy that ensures the IAM user has access only to the new S3 bucket, so click on the Create policy button.

This opens the Create policy page in a new browser tab.

[Screenshot: Create policy page]

Click on the JSON tab and copy in the following, changing the bucket name as appropriate. The three statements allow the user to list all buckets, locate this bucket, and read, write and delete objects within it. (The source of this JSON can be found here.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles/*"
    }
  ]
}

At this point the JSON editor should look like this

[Screenshot: the JSON editor containing the policy]

Once done, click on the Review policy button. Give your policy a meaningful name and description and then click Create policy.

[Screenshot: Review policy page]

You will then receive confirmation that the policy has been created.

Now click the browser tab which displays the Create group page.

[Screenshot: Create group page]

To find your new policy, change the filter (located left of the search bar) to “Customer managed” and press the refresh button (located next to the Create policy button). Once you have found the newly created policy, select it and press the Create group button.

[Screenshot: policy list filtered to Customer managed]

You will now be returned to the Set permissions page; ensure the new group is selected and click Next: Review.

The final page is a review, after which you can click Create user.

[Screenshot: Review page]

Once the user has been created, you will see a confirmation along with a Download .csv button. Click the button to download the credentials, as these will be needed by our C# application discussed in the next post.

[Screenshot: user created confirmation with Download .csv button]

Review

At this point it is worth getting a cup or glass of your favourite beverage and recapping what has been created:

  1. A new AWS S3 bucket.
  2. A new IAM user. This user has been placed in a group. The group has a policy attached that allows it to perform various operations only on the new bucket that has been created.
  3. A csv file containing the required access and secret keys has been downloaded.
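Since the policy grants s3:ListAllMyBuckets, a quick way to sanity-check the downloaded keys, once you have a console project with the AWSSDK.S3 package installed, is a throwaway snippet along these lines (the class name, region and placeholder key values are mine):

using System;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;

class CredentialSanityCheck
{
  static void Main()
  {
    // Paste in the values from the downloaded accessKeys.csv
    var credentials = new BasicAWSCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY");
    var client = new AmazonS3Client(credentials, RegionEndpoint.EUWest2);

    // s3:ListAllMyBuckets is permitted by the policy created above
    foreach (var bucket in client.ListBuckets().Buckets)
    {
      Console.WriteLine(bucket.BucketName);
    }
  }
}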

On to part 2

With the S3 bucket created and the IAM user configured with the necessary privileges, it is time to move on to part two, which will create the .NET console application to upload a file into this bucket.


I try to attend a developers conference once a year and this year I attended NDC London 2018. I was surprised that only two developers out of the many I asked before going knew of the NDC brand of conferences. I discovered NDC thanks to Carl and Richard of .NET Rocks! Thanks gents, I owe you.

Just in case you are not aware of what NDC is, here is a brief description courtesy of the NDC London site.

About NDC

Since its start-up in Oslo 2008, the Norwegian Developers Conference (NDC) quickly became one of Europe’s largest conferences for .NET & Agile development. Today NDC Conferences are 5-day events with 2 days of pre-conference workshops and 3 days of conference sessions.

NDC London

In December 2013 NDC did the first NDC London conference. The conference was a huge success and we are happy to announce that the 5th NDC London is set to happen the week of 15-19 January 2018.

I didn’t attend the pre-conference workshops, so my NDC adventure started on Wednesday. First impressions of the conference and its venue, The Queen Elizabeth II Centre, Westminster, were superb: upon arriving there were no queues to register. Another plus was that the cloakroom was free, which, although a small touch, was one I really appreciated.

Throughout the conference, high-quality hot drinks and food were served continuously, starting with cakes and pastries and moving on to a variety of hot dishes. I wouldn’t normally mention the food at a conference, but it was of a standard I had not encountered at other conferences, so I had to write a few lines about it.


My reason for attending was to hear a number of people speak whose podcasts I listen to, whose books and blogs I have read, or who have helped me by providing answers on Stack Overflow. As a newbie to this conference I did not know what to expect from the talks, but I was not disappointed, and for me the conference experience went into the stratosphere from here on in.

The talks are scheduled to last one hour and, as you will see from the agenda, they cover a wide variety of subjects. The presenters did not disappoint. There was no death by PowerPoint, no about-me slides, no errs/erms or the dreaded “like”. The presenters were passionate about their topics and clearly enjoyed themselves engaging with their audiences. Some had a slightly more conversational style, whilst others used self-deprecation, and one in particular used the medium of song (thank you Jon Skeet, that was unforgettable). One common trait I noticed is that many of the presenters are building and experimenting with “stuff” all the time.

As is the norm, after a talk/session the audience are invited to give feedback, and NDC has probably the best system I have so far encountered: as you leave the room after a talk, just throw a colour in the box. Brilliant.


Here are the sessions I attended:

Wednesday

Keynote: What is programming anyway?
Felienne

Sondheim Seurat and Software: finding art in code
Jon Skeet

You build it, you run it (why developers should also be on call)
Chris O’Dell

I’m Pwned. You’re Pwned. We’re All Pwned.
Troy Hunt

Refactoring to Immutability
Kevlin Henney

Adventures in teaching the web
Jasmine Greenaway

C# 7.1, and 7.2: The releases you didn’t know you had
Bill Wagner

Thursday

Building a Raspberry Pi Kubernetes Cluster and running .NET Core
Alex Ellis & Scott Hanselman

An Opinionated Approach to ASP.NET Core
Scott Allen

Who Needs Dashboards?
Jessica White

Hack Your Career
Troy Hunt

HTTP: History & Performance
Ana Balica

Going Solo: A Blueprint for Working for Yourself
Rob Conery

.NET Rocks Live with Jon Skeet and Bill Wagner – Two Nice C# People

Friday

The Modern Cloud
Scott Guthrie

Web Apps can’t really do *that*, can they?
Steve Sanderson

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne, and Jon Skeet

Tips & Tricks with Azure
Scott Guthrie

Solving Diabetes with an Open Source Artificial Pancreas
Scott Hanselman

Why I’m Not Leaving .NET
Mark Rendle

Summary

NDC London 2018 was the best conference I have ever attended. I have returned from it motivated to do more; to experiment and try stuff that I hadn’t even thought about.

There were so many highlights for me but having my photo taken with Carl and Richard was the best. Seriously guys you rock!



Looking back at the technical books I read in 2017, the biggest surprise is that I didn’t read any books on Oracle, which I think is the longest I have gone between Oracle books. This hiatus will not last long into 2018 because of the imminent launch of Pete Finnigan’s new book.

The four books I did read took me far away from my comfort zone, and two of the four have been screaming bargains (HT to Seth Godin) given what I have learnt from them.

Microsoft C# Step by Step 8th Edition

This was the first technical book I read this year. As I continue to learn C#, I look to buy any and all introductory C# books to read different authors’ descriptions of the language fundamentals.

The book is well structured, with nice end notes that recap what each chapter has covered. In addition, the code examples were complete and easy to follow. Despite all the positives, the book didn’t really grab me, and after the first few chapters it became a bit of a slog to get through, so I didn’t finish it. Not a bad book by any means, just not one for me.

Adaptive Code 2nd Edition

This is my favourite technical book of the year. It has stretched me further than I thought possible and has taught me so much.

It is split into four parts. Part I is a good overview of Agile development frameworks, Scrum and Kanban; Part II focuses on dependency management, programming to interfaces, testing and refactoring; Part III covers the SOLID principles; and Part IV covers dependency injection, finishing up with coupling.

Although not a huge book at 421 pages, it has taken me the best part of six months to read and understand about three-quarters of it. I feel I will be revisiting specific chapters for a long time to come, as I have only just scratched the surface of the valuable information this book contains.

One minor criticism is that not all the code examples can be run: you are given a fragment of code that you may wish to play with, to see the different results of changing x and y or just to get a better understanding of the topic being discussed, but this is not always possible. That aside, this is an easy book to recommend.

MongoDB The Definitive Guide 2nd Edition

This year I have been experimenting with a number of C# console applications that use NoSQL databases. Rather than endlessly Googling for information, I thought I would buy this book to get a good grounding in MongoDB, especially when it comes to security.

I bought the 2nd edition of this book, which is now out of date; I quickly lost confidence in it and returned to Googling for information and using the official MongoDB docs.

Dependency Injection in .NET

Dependency injection (DI) was a technique hitherto unknown to me. Although it is discussed in Adaptive Code 2nd Edition, I felt I needed to find out more and hear other people’s opinions. One other point which piqued my interest was that a blog post referenced by many answers on Stack Overflow describes DI in two pages of A4, yet there is a 400+ page book on the subject.

I bought Dependency Injection in .NET for two reasons: firstly, it is focused on .NET, which I am currently learning, and secondly, the overwhelmingly positive reviews on Amazon.

The book is split into four parts. Part I naturally starts with an overview of the problem that DI solves, using a simple example that is initially written without DI and then rewritten to use it. The next chapters move on to a bigger, real-world example, and part one closes with a look at DI containers.
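To give a flavour of the rewrite the book opens with, here is my own condensed constructor-injection sketch (not the book’s exact code):

using System;

public interface IMessageWriter
{
  void Write(string message);
}

public class ConsoleMessageWriter : IMessageWriter
{
  public void Write(string message) => Console.WriteLine(message);
}

public class Salutation
{
  private readonly IMessageWriter writer;

  // The dependency arrives through the constructor instead of being
  // created with new inside the class
  public Salutation(IMessageWriter writer)
  {
    this.writer = writer ?? throw new ArgumentNullException(nameof(writer));
  }

  public void Exclaim() => writer.Write("Hello DI!");
}

class Program
{
  static void Main() => new Salutation(new ConsoleMessageWriter()).Exclaim();
}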

Part II covers DI patterns and then, interestingly, anti-patterns, followed by DI refactorings. Part III looks at DIY DI, and Part IV takes an in-depth look at DI containers such as Castle Windsor, StructureMap and so on.
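For a flavour of the container-driven version of the same idea, here is a sketch using Castle Windsor with the types from the snippet above (illustrative only; the book goes much deeper):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

class ContainerDemo
{
  static void Main()
  {
    var container = new WindsorContainer();

    // Map the abstraction to an implementation and register the consumer
    container.Register(
      Component.For<IMessageWriter>().ImplementedBy<ConsoleMessageWriter>(),
      Component.For<Salutation>());

    // The container supplies the constructor argument automatically
    container.Resolve<Salutation>().Exclaim();
  }
}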

At the time of writing I am on page 133, which is the start of the DI anti-patterns. I won’t be reading much further, as I feel I have got as much as I can from this book for the time being, but as my experience in OO languages grows I will be back to correct bad habits and learn how to get the best out of the DI containers I may be using.

One other interesting point is that, for reasons I do not know, the cover of this book has gathered more comments from people passing by my desk than any other book I have owned!

Conclusion

I have gained much from reading these books (yes, even you, MongoDB: The Definitive Guide). They have all added something to my skills as a developer and given me different ideas and solutions to problems that I currently face and have yet to face.

I can’t wait to see what the technical books I read in 2018 will be…

I have been very excited about the potential of Live Share since its announcement back in November.

In addition to the content on the Live Share site, I would also recommend listening to this episode of Hanselminutes where Scott talks to Amanda Silver about how Visual Studio’s Live Share goes far beyond “text editor sharing” to something deeply technically interesting.

Although it is still very early days, Live Share is definitely something worth keeping an eye on. A private preview is coming soon and you can sign up for more information here.
