Distracted Progress


There are a lot of distractions that run across your desktop, kitchen table, cafe table, and your mind throughout the day.  So I decided to block them all for this long Memorial Day weekend and dive deep into a new web thingee that’s been getting a lot of chatter and traction in the developer community. I set a simple goal after asking myself, “what do you want to accomplish with this deep dive, given the time you have to spend?”

 

I wanted to get through the first three chapters of the online tutorial and get four things out of it:

1) figure out what I don’t know and should;

2) learn the lingo of this particular language so my Google and Bing fu doesn’t suck and I can find what I need;

3) stick with the tutorial as closely as I can and get my laptop set up so I can do my web development going forward;

4) build something that works and has some tests built into it, so I’m not just slinging code, but testing what I sling.

So the effect of doing something like this (for me, anyway) is that I want to keep going, go deeper, and understand more – part of my Magic Factory initiative I blogged about last month.  My goal is to finish this tutorial and actually move into some more advanced content and site construction.  But, baby steps – always baby steps with this type of thing, so I don’t glaze over something I need to “get” and would miss if I’m in a big hurry.

j@s

So, how did this blog get here?

There was a time when I knew and spoke with many developers across my region (FL, GA, TN, OH, MI) who weren’t very connected to one another’s efforts, successes, and lessons learned. So, for the Florida-based coders, I was thinking how cool it would be to have a Florida Coders United blogging area, or portal.  I started with that idea on Blogspot over a decade ago and ended up here, taking the handle “onefloridacoder” along the way – that’s where my handle came from.

Blogging about my own lessons learned, tips, tricks, and efforts was just “ok”, but getting up in front of a group of people at a talk, or sitting around a table full of beverages (beer, coffee, water, soda, etc.) was much more powerful.

Still, I’ve found through a lot of conversations over the years that my fellow developers who do blog a bit do it for the sake of “OH!  I figured it out! Let me leave a trail of bread crumbs so I can find this in the future.”  Some developers blog about other things going on in their lives – not much, but some do here and there.

So, the logo for Florida Coders United would have looked something like “FcU” – yeah, there’s a subtle pun in there, but it wasn’t the intent.  At any rate, I migrated from Blogspot, to GeeksWithBlogs, to a Microsoft-hosted blog – then finally had that converted to a WordPress blog as “john@scale”.

This is why, and how, this blog got started in case you were interested.

j@s

The Magic Factory


I recently discussed the “Magic Factory” with my girlfriend to see what she thought about a Grand ReOpening.  Oh, what is the Magic Factory? It’s a place I walk into mentally with one, two, maybe three ideas, and blue-sky what might happen if I built applications around those ideas.

I’ve had a bunch of ideas up on AgileZen waiting to be unpacked and built, but about this time last year I lost the fire to pick up my tools and start writing software again.  I spent all of last year trying to relight it, and all I could render was a spark – until this conversation happened.

Then last month our shop announced Hack Day 2014 – hmm… how could I pass that up, right?  Our shop was going to give us 24 hours to build something from scratch, then present it to leadership the next day.  The 1st, 2nd, and 3rd prizes were pretty sweet, but it wasn’t about winning for me – it was a test to see if I could start and finish, something you take for granted when you’re a bit less gray over the ears.  So, freakin’ sign me up!

I found a great (and smart) peer on my team who wanted to help build something in 24 hours from scratch.  I thought this would be the real test to see if I could pry the doors open on this place I used to spend so much time in.

We didn’t win the grand prize, or make the final cut – but we built an app in 24 hours that conveyed the business idea we wanted to promote.  Now, with the doors to the Magic Factory swinging with activity and visitors like my grandma’s front porch screen door, things feel a bit more normal.  Oh, by the way, welcome to my “Grand (Re)Opening!”

 

Mobile First. Cloud First.

[Image: Titanfall]

A few weeks ago I listened to Scott Guthrie discuss mobility and how it relates to Azure, and vice versa.  I knew a bit about the mobile pieces, but the Azure side of the talk was jam-packed with new and updated services and offerings.  This platform has really come a long way in the last four years, for sure.

One of his first Azure talking points was about IaaS and what it could mean to developers.  Enter Titanfall.

He discussed some of the elastic infrastructure the development and program folks were using to prop this game up – all over the world.  It was pretty amazing.  From this part of the talk we heard quotes like “Deploy at the speed of light – on your terms,” “Compete in a global market, but close to your customers,” and “constantly available resources.”

All of those, Azure or not, have a very nice ring to them – and they ring true from what we saw in the Titanfall highlights.  From what I can tell from the Titanfall preview, the size of your application isn’t as large a problem as it has been in the past; IMHO, understanding the architecture is what really needs to matter – how all of the Legos fit together, whether they talk to each other at all, and who talks to whom, when, and why.  That statement held water yesterday and holds water today, and I think it also fits the future goals we set for our solutions now.

Titanfall has a lot of “headroom” to grow into, and the development tools are pretty sweet at this stage. And Scott promised they’d get better and better as time goes on.  Awesome!  If you’ve used the older versions of the Windows Azure portal, SDKs, and Visual Studio integrations, you know they’ve all matured into things that remove friction from our daily development work.  The notion that they’ll mature even more is even more awesome (yes, I use the word “awesome” quite often).

The statistics he shared from Azure were just as impressive; here are a few screenshots:

1) The footprint of the data-centers around the world, and a few more coming online soon;

2) Interesting adoption stats, ranging from authentication, to Visual Studio Online registrations, to requests per second;

3) An updated portal dashboard that displays the development, production, and financial concerns of the portal owner – very slick indeed!

[Screenshots: azure.footprint, azure.stats, portal.makeover]

Try Not To Boil The Ocean

Do you use TDD to help shape your application’s design?  I do, although I’m not as strict as I used to be.  I used to test eeeverything; now, not so much.  I was a fly on the wall during some Agile coaching sessions from “The Dude” at my shop; he kept making a statement during one particular coaching session – “Let’s not try to boil the ocean, ok?”

I know he’s not the first one to say this; it’s a pretty common saying.  The context was around stories, epics, and setting up a story map with discrete stories that simply organize work items for the team to work on – small bites, Legos, pick your analogy.  It allowed the story map to flow better and gave the team context around what’s next, and what “it” is that’s being accomplished.

I try to do the same thing with TDD – as I’ve mentioned in other blogs, testing the framework isn’t as valuable as testing your own code, right?  I mean, if there’s a bug in the framework you’re using, you’re going to find it sooner or later if you’re pressing any particular namespace pretty hard.

One thing I also learned from Uncle Bob was that TDD can help you understand a framework – probably not as much as a Code Kata, but TDD helps me understand the namespace(s) I’m working in and keeps the context and responsibilities of the classes tighter and more cohesive.
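
To make that concrete, here’s a hedged example of what I mean by using tests to learn a namespace – a couple of “learning tests” that pin down framework behavior before I lean on it.  MSTest is assumed here purely for illustration; any test framework works the same way.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// "Learning tests": they don't test my code at all, they just document
// framework behavior I want to rely on later.
[TestClass]
public class PathLearningTests
{
    [TestMethod]
    public void Combine_ReturnsSecondPath_WhenSecondPathIsRooted()
    {
        // Gotcha worth pinning down: a rooted second argument wins.
        var result = System.IO.Path.Combine(@"C:\media", @"\uploads\clip.mp4");
        Assert.AreEqual(@"\uploads\clip.mp4", result);
    }

    [TestMethod]
    public void GetExtension_IncludesTheLeadingDot()
    {
        Assert.AreEqual(".mp4", System.IO.Path.GetExtension("clip.mp4"));
    }
}
```

Cheap to write, and they stay behind as documentation of what I learned.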

Moving on…

I started out trying to boil the ocean with Windows Azure Media Services (WAMS) because I didn’t get how everything was wrapped together and how things were processed.  There are things you can’t mock, but some things you can.  There are many interface types you can implement to make your own mocking types for quicker tests, and understanding these interfaces once they’re implemented inside your own concrete types helps tell the story of what they’re doing.  Here are a few that I worked with extensively while I was baking my infrastructure classes: IAsset, IJob, and ITask – I got a lot of mileage (and headaches) unraveling and re-wrapping this stuff for my little brain.
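
As a hedged sketch of that pattern (and not the actual code from this solution): wrap the WAMS calls behind a small seam of your own, then hand-roll a fake of that seam for the fast tests.  The IMediaUploadService name and its members below are hypothetical.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical seam over WAMS so the business layer never touches
// IAsset/IJob/ITask directly.
public interface IMediaUploadService
{
    // Returns the Id of the asset that was created and populated.
    string UploadAsset(string assetName, string filePath);
}

// Hand-rolled fake for quick unit tests -- no network and no WAMS account needed.
public class FakeMediaUploadService : IMediaUploadService
{
    public readonly List<Tuple<string, string>> Uploads = new List<Tuple<string, string>>();

    public string UploadAsset(string assetName, string filePath)
    {
        Uploads.Add(Tuple.Create(assetName, filePath));
        return "nb:cid:UUID:00000000-0000-0000-0000-000000000000"; // shaped like a WAMS asset Id
    }
}
```

The real implementation wraps the WAMS types; the fake just records what was asked of it so a business-layer test can assert against it.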

Reducing the temperature from 212F (100C) to a light simmer…

I started with the business layer tests, the thought being I’d spin up some fast(er)-running tests in that space first, then move into the infrastructure layer(s).  The effect was that my business layer tests had too much code in them.  I was building up some of the infrastructure types while I was building the business layer types and tests – not good.  So I stopped and just focused on the infrastructure bits.  Here’s how the infrastructure assemblies look right now:

[Screenshot: infrastructure]

[Screenshot: infrastructure tests]

There are four separate services I’ve created for interacting with WAMS.  Each service is something you can do with the WAMS portal, and I wanted to break it up like that; the only thing missing is a publishing service that actually tosses the media over the wall so a URL is assigned to the content you’ve uploaded and encoded.

I’m not using an IoC container yet to spin up concrete types for me, so to keep the tests simple for now I’m just building the concrete types by hand.
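
Here’s roughly what “by hand” looks like – a hedged sketch, not the code from this solution.  MediaUploadService is a hypothetical concrete type behind the IMediaUploadService seam sketched earlier, the WAMS calls follow the upload pattern from the SDK samples of that era (method names may differ by SDK version), and the account name/key are placeholders.

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.WindowsAzure.MediaServices.Client;

// Hypothetical concrete infrastructure type.
public class MediaUploadService : IMediaUploadService
{
    private readonly CloudMediaContext _context;

    public MediaUploadService(string accountName, string accountKey)
    {
        _context = new CloudMediaContext(accountName, accountKey);
    }

    public string UploadAsset(string assetName, string filePath)
    {
        IAsset asset = _context.Assets.Create(assetName, AssetCreationOptions.None);
        IAssetFile file = asset.AssetFiles.Create(Path.GetFileName(filePath));
        file.Upload(filePath);
        return asset.Id;
    }
}

[TestClass]
public class MediaUploadServiceTests
{
    private IMediaUploadService _service;

    [TestInitialize]
    public void Setup()
    {
        // No container yet: new up the concrete type by hand.
        // The name/key are placeholders; the real fixture reads them from config.
        _service = new MediaUploadService("wams-account-name", "wams-account-key");
    }

    [TestMethod]
    public void UploadAsset_ReturnsAnAssetId()
    {
        // This test really bounces against WAMS, so it's slower than the business tests.
        string assetId = _service.UploadAsset("smoke-test-asset", @"C:\temp\clip.mp4");
        Assert.IsFalse(string.IsNullOrEmpty(assetId));
    }
}
```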

 

This was all I needed to do to keep the infrastructure types from polluting my business unit tests.  Most of these infrastructure tests are bouncing up against WAMS for now, but my business layer tests will be using a few mocked-out types built from the interfaces I mentioned above.  Once the publishing service and the business layer tests are built out I’ll blog those, but now I’ve got to put some thought into how this application is going to use the business layer, so it’s time for a little (more) story mapping.

HTH

oFc

Starting With Another Clean Slate

Starting out with a sample from here, I wanted to build out some reusable libraries I could use with other projects if needed, but focus the libraries on a new application that can take advantage of Azure’s Media Services.  I also learned a few things from the last application I built, and I want to incorporate those bits into the solution as well.

Spinning Up Some Media Magic

I was looking over some old app designs I shelved last year.  They needed support from something like Windows Azure Media Services, but that wasn’t baked last year; maybe the preview was out when I last looked, I’m not sure.  Recently, I’ve been toying with the idea of getting one of these applications stood up using WAMS now that it’s ready.

I used this chain of posts to get a small harness built and get an understanding of what this stuff does and how it works together to create assets, then encode and stream in various formats, or convert between formats.  The sample code is a bit of a fire hose, but the post(s) explain what’s going on with each chunk coming out of it.
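
If you’ve not seen it, the harness boils down to something like this hedged sketch – it follows the getting-started flow from the WAMS .NET SDK samples of that era, so the exact method names and the encoder preset string may differ by SDK version, and the account credentials are placeholders.

```csharp
using System;
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.MediaServices.Client;

class WamsHarness
{
    static void Main()
    {
        // Placeholders -- the real harness reads these from configuration.
        var context = new CloudMediaContext("wams-account-name", "wams-account-key");

        // 1) Create an asset and upload a file into it.
        IAsset input = context.Assets.Create("raw-clip", AssetCreationOptions.None);
        IAssetFile file = input.AssetFiles.Create("clip.mp4");
        file.Upload(@"C:\temp\clip.mp4");

        // 2) Find the encoder processor and queue an encoding job.
        IMediaProcessor encoder = context.MediaProcessors
            .Where(p => p.Name == "Windows Azure Media Encoder")
            .ToList()
            .OrderBy(p => new Version(p.Version))
            .Last();

        IJob job = context.Jobs.Create("encode clip");
        ITask task = job.Tasks.AddNew("encode to smooth streaming", encoder,
            "H264 Smooth Streaming 720p",   // preset name is an assumption
            TaskOptions.None);
        task.InputAssets.Add(input);
        task.OutputAssets.AddNew("encoded-clip", AssetCreationOptions.None);

        // 3) Submit and block until the job finishes (fine for a throwaway harness).
        job.Submit();
        job.GetExecutionProgressTask(CancellationToken.None).Wait();

        Console.WriteLine("Job finished in state: " + job.State);
    }
}
```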

What I wanted to share is related to a post where I was fighting with a few assembly references and how they were mixed up and giving me a rash.  I asked NuGet for the WAMS bits, and here’s what it installed:

[Screenshot: WAMS.Dependencies]

I remember using some of the 1.7 assemblies, but not the 5.1 assemblies;  not a huge thing, but if the other project I was working on last week got mingled with this one, there might be some challenges.

The WAMS stuff looks fun, and it’s something new for me; I’ve not done much of this except for using OS apps to convert media files from one format to another.  The curve may be steep starting out, but yesterday I flattened it a bit and now have a working harness that’s using my WA dev account.

HTH / oFc

Refactor and Review

[Image: Review.And.Refactor]

I’ve started to bring this “spike” to a close, since I’ve figured out the stuff I set out to figure out using a few Azure services and have gotten to the point of refactoring.   When I get to this point with a spike, I do the things listed in the task list you see here.  I’ll step through the why for each of them.

  1. Refactoring for Interfaces – I tend to build up a few concrete types which don’t need to be added to an IoC container for injection; I try not to abuse Unity even though it’s pretty awesome – until I find that a member needs to be injected to reduce a bit of coupling across the solution.  So I’ll look for these opportunities across the solution, extract an interface, and either add it to the container or do some poor man’s injection without the container – it just depends on the context of the type’s usage.  I agree that more interfaces are better, and even marker interfaces serve a purpose, but I try not to go crazy with anything, interfaces included.
  2. Logging – At this point, I know more than I did a few weeks ago and I’ve got a clearer idea of what I want to log.  This time I just need to build up the event source class for the app based on what I learned (see the sketch after this list).  That’s it.  No more, no less.
  3. Magic Strings and Numbers – This one is special. I litter the application with strings and sometimes numbers, and this is the best time to go back over the entire solution and pull them out into something like constants – that’s what worked for this exercise.  I’m walking through all of the code to see if it makes sense, especially the bits that I’ve not seen in a few weeks.  I forget sometimes what I was thinking, and clarifying with a better member or method name is always better than adding comments.  And yes, I’ve got a small battery of tests to fire off after each changeset gets checked in.
  4. Plumbing for cross-cutting stuff – Now that the logging events are done, I need to plant them in the classes that are doing the work.
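
To give item 2 a face, here’s a minimal sketch of the kind of event source class I mean – the source name, event IDs, and messages are made up for the example; SLAB just needs a class derived from EventSource with [Event]-attributed methods it can listen to.

```csharp
using System.Diagnostics.Tracing;

// Hypothetical event source; the SLAB listeners (console, Azure table, etc.)
// subscribe to it by name. IDs, names, and messages are illustrative only.
[EventSource(Name = "OneFloridaCoder-MediaApp")]
public sealed class MediaAppEventSource : EventSource
{
    public static readonly MediaAppEventSource Log = new MediaAppEventSource();

    [Event(1, Level = EventLevel.Informational, Message = "Upload started for {0}")]
    public void UploadStarted(string assetName)
    {
        if (IsEnabled()) WriteEvent(1, assetName);
    }

    [Event(2, Level = EventLevel.Error, Message = "Encoding failed for {0}: {1}")]
    public void EncodingFailed(string assetName, string reason)
    {
        if (IsEnabled()) WriteEvent(2, assetName, reason);
    }
}
```

With that in place, item 4 is mostly calls like MediaAppEventSource.Log.UploadStarted(asset.Name) planted in the classes doing the work.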

Not much here, just some habits I’ve been using over the years to keep solutions clean, readable, and hopefully maintainable.

HTH / oFc

Azure SDK and SLAB

Right before I skated out of the office to start my four-day Memorial Day weekend, I jotted down a few notes to blog about, outlining what worked for me using the P&P Semantic Logging Application Block (SLAB), as well as a few other sticky points I hit while trying to stand up a couple of Windows Azure roles.

It’s really easy to get tangled up with broken builds when you introduce an updated library or dependency into your projects.  And going backwards a few minor versions just feels wrong when you want to “ship” with something current and stay current as long as possible.  While NuGet was handy at keeping everything up to date, it did get in the way at times when I needed to swap around a few bits.  On a side note, the Package Manager Console makes removing and adding things a breeze!  I’m still a huge NuGet fan.

I started out with a 4.0 framework solution, then moved it to the 4.5 framework, so I’ll keep it at, or start at, 4.5.  There was nothing wrong with 4.0 until I started adding the dependencies in.  For example, the 2.0 Windows Azure (WA) Storage, Runtime, and Configuration bits I installed from the WA site (http://bit.ly/v5MF7m) install v2.0.5.1, and those assemblies ran on top of 4.0 just fine.  This got me through the bulk of the coding to stand the application up.

Then, in a previous post, I mentioned I wanted to add in some logging pieces and give SLAB (http://bit.ly/Vh3Umz) a go.  The sample code didn’t work with what I had on my box for some reason, which gave me a clue I might have some dependency issues coming at me with my code.  The P&P code was easy enough to read through, but the namespaces had a few changes at RTW time that I needed to work around – which was expected.

In addition to this, I had to work around issues between WindowsAzure.Storage and the MS.Data.Edm, MS.Data.OData, and System.Spatial assemblies.  I started out satisfying the dependencies with v5.0.2.0, but I *really* wanted to make the latest, v5.4, work; most of the blog posts I read through, though, confirmed v5.0.2.0 was the way to go.  The final fallback that got everything happy was to use v5.2 for the Edm, OData, and Spatial assemblies.  Then I learned where the (v5.0.2.0) Edm, OData, and Spatial dependencies were coming from.

Inside the SLAB reference application, the console application project took a dependency on an earlier version of Edm, OData, and Spatial, and I didn’t want to go that far back – stuff should just work, right?  And it finally did, but not at first.  The logging pieces weren’t throwing anything, but they weren’t working either: the table used to persist the logging entries wasn’t getting built (if it didn’t exist) inside local table storage, so nothing was being logged.

The answer was to use the Edm, OData, and Spatial v5.2.0.0 assemblies. Bingo!  Everything was firing, and all my tests were still passing, so it was a very green day once I got all of the zen sorted out in my assembly references.

Below is a list of each project’s assembly references, sans the ones that come OOTB; hopefully this can help someone bumping into this as well.

HTH / oFc

WebRole (MVC4) Project

  • Microsoft.Practices.EnterpriseLibrary.Common v6.0
  • Microsoft.Practices.EnterpriseLibrary.Logging v6.0
  • Microsoft.Practices.Unity v3.0
  • Microsoft.Practices.Unity.Configuration v3.0
  • Microsoft.WindowsAzure.Storage v2.0.0.0
  • Microsoft.WindowsAzure.StorageClient v1.7.0.0
  • Unity.MVC4 v.1.1.0.0

WorkerRole Project

  • Microsoft.Practices.Unity v3.0
  • Microsoft.Practices.Unity.Configuration v3.0
  • Microsoft.Data.Edm v5.2.0.0
  • Microsoft.Data.OData v5.2.0.0
  • System.Spatial v5.2.0.0
  • Microsoft.ServiceBus v1.7
  • Microsoft.WindowsAzure.Configuration v1.8
  • Microsoft.WindowsAzure.Runtime v1.8
  • Microsoft.WindowsAzure.Storage v1.8
  • Microsoft.WindowsAzure.StorageClient v1.7

Infrastructure Project

  • Microsoft.Practices.EnterpriseLibrary.SemanticLogging  v1.0
  • Microsoft.Practices.EnterpriseLibrary.SemanticLogging.WindowsAzure  v1.0
  • Microsoft.ServiceBus v1.7
  • Microsoft.WindowsAzure.Configuration v1.7
  • Microsoft.WindowsAzure.Runtime v1.7
  • Microsoft.WindowsAzure.Storage v1.7

Table Storage Hiccups

So, like most of us, I thought hooking up table storage would be easy.  Turns out, it’s not if you like to keep up with the latest and greatest support libraries.

The first hiccup came when I tried to connect to my local storage service (UseDevelopmentStorage=true).  Well, now you need to specify the URI that points to the local storage emulator and, like the blog post author states, “it just magically works.”

Now that I had the table sitting there, the next hiccup came when I asked my table service context to build the table if it didn’t exist.  This one was kind of bad, and not something I expected.  First I used the Azure SDK library my exception told me I needed, in this case Microsoft.Data.OData (v5.2.0.0).  There’s a newer version of it as well, v5.4.0.0, which didn’t work either, even with an assembly binding redirect.  I installed v5.4 with NuGet and then had to uninstall it manually by clearing out the packages.config entries and dropping the references.  Then, using the Package Manager Console, I installed v5.0.2.0 instead, without the binding redirect, and it worked.
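
For reference, here’s a hedged sketch of the 2.0-style setup the working combination boils down to – the connection string and table name are placeholders, and the 1.7 client has a different API surface, so match this to the library version you actually have installed.

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class LogTableBootstrapper
{
    public static CloudTable EnsureLogTable()
    {
        // Development storage; a real deployment swaps in the account connection string.
        CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        CloudTableClient tableClient = account.CreateCloudTableClient();

        // Table name is a placeholder.
        CloudTable table = tableClient.GetTableReference("MediaAppLog");
        table.CreateIfNotExists();

        return table;
    }
}
```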

The last hiccup was getting the entities to insert into the table.  You can find more about CRUD here, and Table Storage CRUD operations here.  A side note on that embedded link: it discusses versions 1.7 and 2.0 – yesterday I was looking at 1.7, not 2.0, so check which version of the page you’re looking at before you start coding.

My partition key was missing, and the row key and the timestamp properties were null.  I had read that the table client took care of this for me (apparently not), so I used DateTime.UtcNow for a default property and fixed up my extension method that gen’d my TableEntity for me.  That error came in the form of [ The remote server returned an error: (400) Bad Request ].
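
In code, the fix amounted to something like this hedged sketch – the entity and property names are made up, and the point is simply that PartitionKey and RowKey have to be set by your code before the insert, or table storage answers with that 400.

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity, standing in for what my extension method gen's up.
public class MediaLogEntity : TableEntity
{
    public MediaLogEntity() { }

    public MediaLogEntity(string assetName)
    {
        // Table storage will NOT fill these in for you on an insert.
        PartitionKey = assetName;
        RowKey = DateTime.UtcNow.Ticks.ToString("d19");
    }

    public string Message { get; set; }
}

public static class MediaLogWriter
{
    public static void Write(CloudTable table, string assetName, string message)
    {
        var entity = new MediaLogEntity(assetName) { Message = message };

        // Without valid keys, this call comes back as (400) Bad Request.
        table.Execute(TableOperation.Insert(entity));
    }
}
```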

Here are the links I eventually found that bailed me out today.  All the code is working, and I added a few more guard statements to do some additional state checks before the CRUD bits fire.  Hopefully, the blob storage stuff is ironed out implicitly as well with all of this dependency churn today.

HTH / oFc

Differences between Azure Storage Client Library 1.7 and 2.0

If you decide to use v5.0.2.0, read this.

What Azure allows inside a table entity during CRUD operations

How to use the Table Storage Service