Table Storage Hiccups

So, like most of us, I thought hooking up table storage would be easy.  Turns out, it’s not if you like to keep up with the latest and greatest support libraries.

The first hiccup came when I tried to connect to my local storage service (UseDevelopmentStorage=true).  Well, now you need to specify the URI that points to the local storage emulator, and like the blog post author states, “it just magically works”.
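In case it helps, here’s roughly what that connection looks like – a minimal sketch, assuming the 2.0 storage client library (Microsoft.WindowsAzure.Storage):

```csharp
// Minimal sketch, assuming the 2.0 storage client (Microsoft.WindowsAzure.Storage).
// If the shortcut string misbehaves, the emulator's table endpoint is
// http://127.0.0.1:10002/devstoreaccount1.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
CloudTableClient tableClient = account.CreateCloudTableClient();
```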

Now that I had the table sitting there, the next hiccup came when I asked my table service context to build the table if it doesn’t exist.  This one was kind of bad, and not something I expected.  First I used the Azure SDK library my exception told me I needed, in this case Microsoft.Data.OData (v5.2.0.0).  There’s a newer version of it as well, v5.4.0.0, that didn’t work either, even with an assembly binding redirect.  I installed v5.4 with NuGet and then had to uninstall it manually by clearing out the packages.config entries and dropping the references.  Then, using the Package Manager Console, I installed v5.0.2.0 instead, w/o the binding redirect, and it worked.
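Once the assemblies stopped fighting, the create-if-missing call itself is the easy part.  Continuing the sketch above (the table name is a stand-in):

```csharp
// Build the table if it doesn't exist; "RainDrops" is a made-up table name.
CloudTable table = tableClient.GetTableReference("RainDrops");
table.CreateIfNotExists();
```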

The last hiccup was getting the entities to insert into the table.  You can find more about CRUD here, and Table Storage CRUD operations here.  A side note on that embedded link: it discusses versions 1.7 and 2.0.  Yesterday I was looking at 1.7, not 2.0, so check which version of the page you are looking at before you start coding.

That error came in the form of [ The remote server returned an error: (400) Bad Request ].  My partition key was missing, and the row key and the timestamp properties were null.  I had read that the table client took care of these for me; it doesn’t, so I used DateTime.UtcNow as a default for the timestamp and fixed up the extension method that generated my TableEntity.
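For reference, here’s a sketch of the entity shape that made the 400 go away – the type and property names are stand-ins for mine:

```csharp
// Sketch: populate PartitionKey, RowKey, and Timestamp before the insert,
// or the service answers with 400 Bad Request. Names are stand-ins.
public class RainDropEntity : TableEntity
{
    public RainDropEntity() { }   // parameterless ctor required for deserialization

    public RainDropEntity(string source, string dropId)
    {
        PartitionKey = source;        // required
        RowKey = dropId;              // required, unique within the partition
        Timestamp = DateTime.UtcNow;  // default so it's never null locally
    }

    public string Payload { get; set; }
}

// Usage:
var entity = new RainDropEntity("backyard", Guid.NewGuid().ToString()) { Payload = "drip" };
table.Execute(TableOperation.Insert(entity));
```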

Here are the links I eventually found that bailed me out today.  All the code is working, and I added a few more guard statements to do some additional state checks before the CRUD bits fire.  Hopefully, the blob storage stuff is implicitly ironed out as well with all of this dependency churn today.

HTH /oFc

Differences between Azure Storage Client Library 1.7 and 2.0

If you decide to use v5.0.2.0, read this.

What Azure allows inside a table entity during CRUD operations

How to use the Table Storage Service

Technical Evaporation…

This is the process of getting some data out of the local environment and up to the cloud – actually, this is what evaporation is – the moisture has to come from somewhere, whether it’s my backyard or the Gulf of Mexico, get up into the atmosphere, and form the cloud that makes the rain, right?

Like I mentioned in the previous post, I have been using mock data to get some of the view plumbing working, so now I need to actually begin taking things off my Azure Service Bus queue and putting them somewhere else.  I hadn’t thought about this b/c I just wanted to let the app’s development process lead me to the next feasible (and cheapest) choice for storage and persistence.  And the thought in the back of my mind is something that Ayende Rahien said in a keynote a few years ago: “we should be optimizing for persisting data, not retrieving it.”  Yeah, pretty big statement, but it’s true.

So, the simplest thing I could do would be to persist everything into Azure table or blob storage until there’s some requirement for reporting; then maybe a database can come into play.  I’d have to pay extra storage and compute costs on my Azure account to host a database, and I don’t really need a full-blown database instance right now.  I can just figure out what I need by stuffing things into something flatter and cheaper.  But if I code this right, I should be able to move it into an instance if something triggers that need.

Moving on.

I settled on blob storage for my logging and table storage for my application data.  I built my code from this post, which had some quick and dirty examples of accessing Azure’s table storage.  It turned out nicely, so hopefully it can be extended for other table storage persistence needs in the future.  Not as generic as I would have liked, but it works for now.
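The shape I ended up with is something along these lines – a sketch, not that post’s code verbatim, and it assumes the 2.0 storage client:

```csharp
// Sketch of the not-quite-generic wrapper; TEntity could be opened up
// further for other persistence needs later.
public class TableStore<TEntity> where TEntity : TableEntity, new()
{
    private readonly CloudTable _table;

    public TableStore(CloudStorageAccount account, string tableName)
    {
        _table = account.CreateCloudTableClient().GetTableReference(tableName);
        _table.CreateIfNotExists();   // guard: make sure the table is there
    }

    public void Insert(TEntity entity)
    {
        _table.Execute(TableOperation.Insert(entity));
    }
}
```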

Now, the app is throwing something somewhere b/c the local worker role is puking and coughing pretty hard right now.  So, back to my earlier post from yesterday: even though my tests are passing, I need something to catch this junk so I can look at it.  This will let me wire up the blob storage client – oh, there’s the PowerShell stuff to get to as well.  Should be a full(er) day today.
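Something along these lines in the worker role’s Run loop is what I have in mind – a sketch, with the queue work faked out:

```csharp
// Sketch: keep the worker role alive and surface whatever it's choking on.
// ProcessNextQueueMessage() is a stand-in for my Service Bus work.
public override void Run()
{
    while (true)
    {
        try
        {
            ProcessNextQueueMessage();
        }
        catch (Exception ex)
        {
            // Lands in the diagnostics pipeline once the listener is wired up.
            Trace.TraceError("Worker role caught: {0}", ex);
        }
    }
}
```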

HTH /oFc

What Happens When It Stops Raining?

[image: sunny day]

When we notice our cloud has stopped raining, it’s time to take a look under the hood to see what happened.  Or, is there a better place to look before we raise the hood?  A few questions to ask:

1) Was it something I did?

2) Was it something that happened inside one of the Azure instances?

3) Did the application run out of work?

4) Where can I look to see what was going on when it stopped?

Only you can answer the first question.  If all of your tests aren’t, or weren’t, passing and you promoted something to a production instance anyway, you might be able to answer this one fairly easily.

The second question assumes you can get to your management portal and look at the analytics surfaced by Azure.  There might have been, or might be, a problem with one or more of your instances restarting.  I’ve never seen either of my instances stay down after a restart unless there was an unhandled exception getting tossed around.  Usually I find these problems in the local dev fabric before I promote.  Sometimes I don’t, though; on a few occasions, even though my tests were passing, I had missed some critical piece of configuration that my local configuration had and the cloud config was missing.  I call this PIBKAC – problem is between keyboard and chair.  Usually the analytics are enough to tell you if there were problems.  And from there you can fix configuration if needed, or restart your instances or any other Azure feature you’ve got tied to the application.

The third question is kind of a sunny day scenario where the solution is doing what it’s supposed to in a very performant way.  However, sometimes ports can get ignored b/c of a configuration issue like the one I mentioned earlier.  If you’ve been storing your own health monitoring points, you can probably tell if your application has stopped listening for new requests, or simply can’t process anything.

The fourth question is about having something that looks around the instance(s) and captures some of your system health points: how many messages am I receiving and trying to process; how quickly am I processing the incoming messages; are there any logs that can tell me what was going on when it stopped raining?
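Something as simple as this sketch would cover those points – the names are mine, not from any SDK:

```csharp
// Sketch: a health point captured per interval and persisted somewhere
// cheap (table storage fits nicely here).
public class HealthPoint
{
    public DateTime CapturedUtc { get; set; }
    public int MessagesReceived { get; set; }
    public int MessagesProcessed { get; set; }
    public TimeSpan AverageProcessingTime { get; set; }
}
```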

I’ve been using Enterprise Library from the PnP team for >6 years and I still love the amount of heavy lifting it does for me.  The wire-ups are usually easy and straightforward, and the support behind each library drop is constant and focused.  Recently Enterprise Library 6 dropped with a bit of overhauling to target .NET 4.5 among other things, and here’s a blog post by Soma that discusses a few of the changes at a high level.

I’ve used the Data and Logging Application Blocks, as well as Unity, successfully.  I had recently started wiring my solution to use the Azure Diagnostics listener to capture some of the diagnostic events, particularly instance restarts from configuration changes.  Now, I think/hope I can use the Logging Application Block to wire up all of my logging events and push them to something simple like blob or table storage.
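Roughly what I’m picturing for that wire-up – a sketch, assuming EntLib 6’s bootstrap requirement and a configured listener that lands entries somewhere I can flush to blob or table storage:

```csharp
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

// EntLib 6 requires setting the log writer once at startup.
Logger.SetLogWriter(new LogWriterFactory().Create());

Logger.Write(new LogEntry
{
    Message = "Queue listener stopped receiving messages.",  // sample event
    Severity = TraceEventType.Critical,
    Categories = { "SystemHealth" }                          // made-up category name
});
```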

I’ve never liked a UI that I have to open up and look through; it just makes my eyes tired and it’s annoying – I’d like to have something a little easier to look up fatal and critical logs with first, then go from there.  PowerShell (PS) looks cool and fitting for something like this, and I can probably do something quick and dirty from my desktop to pull down critical, fatal, or warning logs, but I’m not a PS junkie.  But it would make for an interesting exercise to get some PS on me.  Oh, on a side note, I picked up this book to (re)start my PS journey and so far it’s been worth the price I paid.  Some of the EntLib docs mentioned pushing data to Azure storage so I may just start there to see if this can work.

Here’s the doc and code downloads if you want to take a look around.

HTH /oFc

Black Beauty Rides Again…

[image: work and play]

Well, it’s not like she’s been off the road for a while, but with all of the rain last week I just thought I’d be safer not to challenge the elements and wait.  And the wait was so worth it.  I mapped out a ride down yesterday morning, with a burger stop in Lakeland – it was a great ride!  Here are the details: http://bit.ly/10dPXnJ  This should be a public page; sorry if it’s not, but it may be restricted b/c it’s part of the Harley Owners Group Ride Planner site.

About halfway to Lakeland I had to stop to clean the love bugs off my helmet.  It was pretty bad, and nasty, but ’tis the season here in Florida.  My bike was pretty well covered in them too, but it just needs a quick bath and it’ll be good as new.  The ride was just over 100 miles and it was such a great day of riding.  Great for clearing out your mind and enjoying the beautiful state of FL.

/oFc

Squeezing The Rain Out Of A Cloud

[image: mostly cloudy, rain]

This week’s weather has been overcast and cloudy, and this morning the forecast looks like the image above (according to an online weather source).  So I thought I’d share a few things I worked through yesterday and this morning to get some rain out of my cloud.

Here’s what the solution is using, so far:

MVC 4 Internet project – and all the scaffolding bits it gives us.  NuGet packages only include Unity.Mvc4 & Microsoft.ServiceBus right now.  I’ve got some ideas for logging that’ll use some of the PnP Enterprise Library goodness to wire up some business and system logging for this solution, as well as tap into Wasabi for scaling & throttling ideas I have for the solution – more on the last two later on though.

The solution is pretty simple, and that’s how I’d like to keep it for now.  But I needed to start pulling some mock data into grids, and historically, being a server-side coder, I started out trying to unload my collections with some of the HTML helpers Razor exposes to us.  But it really just didn’t feel right, and I found myself trying to hack something together to get some data into the view.  The night before, I was having a conversation with John Papa around UI/UX content enablers.  He makes all of this stuff look so easy, so I thought, sure, I can do this too, and I set out to jam in a jQuery plug-in to spin through my data.

I settled on jqGrid after only a few minutes of looking around the jQuery home page.  Not being the client-side junkie most folks are these days, I started Binging for code samples.  After a few hours (seriously, I’m not that smart and some things take longer than they should) I found a post on Haack’s site that talked about which conventions need to match and which ones don’t.  Oh, and after a few minutes I fixed the signature on my controller action and the code started lighting up.  Now it’s only formatting I need for a few views, which I won’t burn too many cycles on.

Using Phil Haack’s example, I had jqGrid running in a few minutes.  But I will say this – I did learn a lot about the HTML helpers ASP.NET offers; they are powerful and very handy.

The side effect of this choice was larger than I thought, though, and required a bit of refactoring.  The view receiving this data was strongly typed with IEnumerable, and now that the data was coming from a jqGrid call to an action that returned a JSON payload, I didn’t need that.  The repository method that was serving the data to the controller looked funny now.  I needed to scope the method to just return the requestor’s data, not all of the data.  I may still split this interface up because of ISP (the Interface Segregation Principle), but I’ll keep my eyes open for code smells just in case.
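For anyone following along, the action shape from Haack’s convention looks something like this sketch – my repository call and column names are stand-ins:

```csharp
// Sketch: the JSON payload jqGrid expects (total/page/records/rows),
// scoped to the requestor's data. _repository and the columns are stand-ins.
public JsonResult GridData(string sidx, string sord, int page, int rows)
{
    var drops = _repository.GetForCurrentUser().ToList();  // just my data
    int totalRecords = drops.Count;

    var data = new
    {
        total = (int)Math.Ceiling((double)totalRecords / rows),
        page,
        records = totalRecords,
        rows = drops
            .Skip((page - 1) * rows)
            .Take(rows)
            .Select(d => new { id = d.Id, cell = new[] { d.Name, d.Status } })
            .ToArray()
    };

    return Json(data, JsonRequestBehavior.AllowGet);
}
```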

So, there’s a bit of refactoring going on today before I hook up the Azure persistence piece, which is next.  I haven’t quite figured that out yet, but soon.  The easy part about this is I can still target my Azure Service Bus queues and tap into the local storage emulator on my box until I figure out data and/or document shapes for the storage thingee.

Here’s a gist with the view and controller source.

HTH /oFc

Living Inside a Cloud Should Be Easy, Right?

[image: crescent wrench]

It’s been a few months since I dropped the Azure SDK on my desktop, and the tooling has changed considerably to say the least.  The portal changed a bit as well, but once you get used to it, it just works for you, and unlike before, you can see everything that’s going on “up there” at a glance.

However, back in the IDE, particularly inside the code, there are more pieces that are supposed to bolt up to your Azure code.  And if you’re using an MVC 4 web role, you can pull in a NuGet package called Unity.Mvc4, which comes with this handy little bootstrapper you can use to load your Unity container using the UnityConfig class that runs next to the bundling and routing configs in the App_Start folder.

This was one thing that I didn’t realize was new to the MVC 4 scaffolding.  These config classes help break up the things we’ve piled into the Global.asax for a long time.  And the UnityConfig class follows suit nicely.

The idea with the bootstrapper is to help keep the type mappings contained, while still loading them when the app domain spins up each time.  All of the other pieces appear to act the same, i.e. lifetime management, aliasing, and child containers.

The last thing I’ll mention about things fitting together: when I started this solution months ago, I was using a previous version, and the upgrade wizard didn’t fire up, so I didn’t get a version bump on my web role, “and that’s when the fight started”.

If you’re trying to preserve your old solution and get it to act like an MVC 4 template, don’t.  If you don’t get the version bump from the IDE, stop.  Create (or add) a proper MVC 4 project from the project template dialog and go from there.  Copy your code to the new one, fix up the usings and references, and keep going.

While I was doing this refactoring and sorting out my existing unit tests the code started to thin out and I realized that the MVC 4 bits could do what I was making the older MVC project do.  It just took a bit of frustration and brute force to recognize this and keep coding.

I had the unique pleasure of deleting a lot of code and still having everything work well.  I just had to sync with the tooling and the way things are supposed to fit together now.  Same tools, just a different approach sometimes when the bits are out in front of the IDE.  Not a bad thing, just different, and better.

*** Update ***

So I didn’t need the UnityConfig class anyway.  The NuGet step actually plugged the bootstrapper into the root of the website and exposed a static RegisterTypes(IUnityContainer container) method that handles the mappings, plus a static method that handles building and returning the container.  I usually don’t wrap my type registrations in code, but rather in the configuration file so I can easily add types on the fly.  Here’s a sketch of that bootstrapper with one registration added (the repository mapping is a stand-in for my own types):
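```csharp
using System.Web.Mvc;
using Microsoft.Practices.Unity;
using Unity.Mvc4;

public static class Bootstrapper
{
    public static IUnityContainer Initialise()
    {
        var container = BuildUnityContainer();
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
        return container;
    }

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();
        RegisterTypes(container);
        return container;
    }

    public static void RegisterTypes(IUnityContainer container)
    {
        // The one registration; IRainRepository/RainRepository stand in for my types.
        container.RegisterType<IRainRepository, RainRepository>();
    }
}
```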


HTH /oFc

Clouds Need to Make Rain, right?

So I’ve been working on this cloud stuff off and on for a few months now.  And while the cloud vendors try to make it easy to work with cloud stuff, things aren’t always intuitive unless you clear your mind and don’t try to do things the way you remember them, but the way you’re being told they need to work.

Then after taking your code around the block a few times, you take something someone else coded or created and make it your own.  Most of the time it works this way, but there are times when it doesn’t and you just have to apply brute force and push that rock back up the hill.  And once you do the first time, everything starts to click (and work).

I guess the idea here is that working with cloud technology is fun and challenging, but you have to keep your eyes on what you set out to build initially and not get bogged down in why something doesn’t work.

If it doesn’t work, start from scorched earth, as in, throw away *all of the code you just wrote* (hard to do sometimes) and start all over.  I did yesterday and tossed about 1,000 lines of source code – and in about 15 minutes worked around a problem I’d been dealing with for a while.

Of course there were other (positive) external forces that helped me get beyond the block I was experiencing, but scorched earth was the right, first, step to take.

And as it worked out, my piece of the cloud started raining on the scorched earth and once all of the smoldering finished, I had something really nice to work with and continue working with.

HTH /oFc

A New Year

[image: Happy New Year]

Well, I made another lap around this life and tacked on another year, grateful and blessed I made it this far.  So on my birthday, before I start another lap, I stop (full stop, no distractions) and think: what went well, what didn’t, where did I want to go and didn’t get to, and where am I heading based on the trajectory I’m on right now?

I write my goals on something the size of a business card, tuck it inside my wallet, and revisit it when I’ve got downtime – actually, airports coupled with a good pair of ear buds are good for this exercise.  I work in a world where 5-year plans are popular, but there’s too much that can happen in five years; still, I get the idea of putting things in place for a 5-year goal.  My target is just one year, not five.

I set about putting a few reminders around my home and work to remind me of the goals I’ve chosen to apply.  Now, most of us have some type of performance plan our jobs ask us to incorporate, but that’s not the type of goal I’m talking about.  Here’s a short list of a few candidates I consider each year: good listener, enthusiastic, passionate, visionary, role model, integrity, organized, knowledgeable, credible, empowering, patient, understanding.

You can incorporate many of these into other things you do day-to-day, and they can add that extra challenge to those things you’re working on to hone yourself through the next year.  And out of that list, pick no more than four or five.  You’ve got to take these goals and apply them in small portions; think baby steps.

Just be aware of what you want to enhance about yourself, and tweak yourself as needed, or when you see one of them slip.  Those are the moments you need to catch yourself and say, “hey, I’m working on that…  why didn’t I handle that opportunity better?”  Don’t shame yourself; just coach yourself, make a mental note, and move on.  Don’t get hung up on it, move on.  It’s ok, we’re still learning about ourselves – well, I can’t speak for all of the readers of this blog, but I certainly am always looking for room to improve.

Happy New Year!

oFc