Technical Evaporation…

This is the process of getting some data out of the local environment and up to the cloud – which is really what evaporation is: the moisture has to come from somewhere, whether it’s my backyard or the Gulf of Mexico, get up into the atmosphere, and form the cloud that makes the rain, right?

Like I mentioned in the previous post, I have been using mock data to get some of the view plumbing working, so now I need to actually start taking things off my Azure Service Bus queue and putting them somewhere else.  I didn’t think about this up front b/c I just wanted to let the app’s development process lead me to the next feasible (and cheapest) choice for storage and persistence.  And the thought in the back of my mind is something Ayende Rahien said in a keynote a few years ago… “we should be optimizing for persisting data, not retrieving it.”  Yeah, pretty big statement, but it’s true.

So, the simplest thing I could do would be to persist everything into Azure table or blob storage until there’s some requirement for reporting; then maybe a database can come into play.  I’d have to pay extra storage and compute costs on my Azure account to host a database, and I don’t really need a full-blown database instance right now.  I can just figure out what I need by stuffing things into something flatter and cheaper.  But if I code this right, I should be able to move it into an instance if something triggers that need.

Moving on.

I settled on blob storage for my logging and table storage for my application data.  I built my code from this post, which had some quick-and-dirty examples of accessing Azure’s table storage.  It turned out nicely, so hopefully it can be extended for other table storage persistence needs in the future.  Not as generic as I would have liked, but it works for now.
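To give a rough idea of the shape this took, here’s a minimal sketch of a table storage repository using the Azure storage client library; the entity and class names are just illustrative, not the actual types in my solution.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class RegistrationEntity : TableEntity
{
    public RegistrationEntity() { }

    public RegistrationEntity(string eventName, string attendeeId)
    {
        PartitionKey = eventName;   // partition by something you'll query on
        RowKey = attendeeId;        // unique within the partition
    }

    public string Email { get; set; }
}

public class RegistrationTableRepository
{
    private readonly CloudTable _table;

    public RegistrationTableRepository(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        _table = account.CreateCloudTableClient().GetTableReference("Registrations");
        _table.CreateIfNotExists();   // cheap no-op after the first call
    }

    public void Save(RegistrationEntity entity)
    {
        _table.Execute(TableOperation.InsertOrReplace(entity));
    }
}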

Now, the app is throwing something somewhere b/c the local worker role is puking and coughing pretty hard right now.  So, back to my earlier post from yesterday: even though my tests are passing, I need something to catch this junk so I can look at it.  This will let me wire up the blob storage client – oh, there’s the PowerShell stuff to get to as well.  Should be a full(er) day today.

HTH / 0Fc

What Happens When It Stops Raining?

sunnyday

When we notice our cloud has stopped raining, it’s time to take a look under the hood to see what happened.  Or is there a better place to look before we raise the hood?  A few questions to ask:

1) Was it something I did?

2) Was it something that happened inside one of the Azure instances?

3) Did the application run out of work?

4) Where can I look to see what was going on when it stopped?

Only you can answer the first question.  If all of your tests aren’t, or weren’t, passing and you promoted something to a production instance anyway, you might be able to answer this one fairly easily.

The second question assumes you can get to your management portal and look at the analytics surfaced by Azure.  There might have been, or might be, a problem with one or more of your instances restarting.  I’ve never seen either of my instances stay down after a restart unless there was an unhandled exception getting tossed around.  Usually I find these problems in the local dev fabric before I promote.  Sometimes I don’t though, so on a few occasions, even though my tests were passing, I had missed some critical piece of configuration that my local configuration had and the cloud config was missing.  I call this PIBKAC – problem is between keyboard and chair.  Usually the analytics are enough to tell you if there were problems.  From there you can fix configuration if needed, or restart your instances or whatever other Azure feature you’ve got tied to the application.

The third question is kind of a sunny day scenario where the solution is doing what it’s supposed to in a very performant way.  However, sometimes ports can get ignored b/c of a configuration issue like the one I mentioned earlier.  If you’ve been storing your own health monitoring points, you can probably tell whether your application has stopped listening for new requests or simply can’t process anything.

The fourth question talks about having something that’s looking around the instance(s) and capturing some of your system health points: how many messages am I receiving and trying to process; how quickly am I processing the incoming messages; are there any logs that can tell me what was going on when it stopped raining.

I’ve been using Enterprise Library from the PnP team for more than six years and I still love the amount of heavy lifting it does for me.  The wire-ups are usually easy and straightforward, and the support behind each library drop is constant and focused.  Recently Enterprise Library 6 dropped with a bit of overhauling to target .NET 4.5 among other things, and here’s a blog post by Soma that discusses a few of the changes at a high level.

I’ve used the Data and Logging Application Blocks, as well as Unity, successfully.  I had recently started wiring my solution to use the Azure Diagnostics listener to capture some of the diagnostic events, particularly instance restarts from configuration changes.  Now, I think/hope I can use the Logging Application Block to wire up all of my logging events and push them to something simple like blob or table storage.

I’ve never liked a UI that I have to open up and look through; it just makes my eyes tired and it’s annoying – I’d like something a little easier for looking up fatal and critical logs first, then going from there.  PowerShell (PS) looks cool and fitting for something like this, and I can probably do something quick and dirty from my desktop to pull down critical, fatal, or warning logs, but I’m not a PS junkie.  Still, it would make for an interesting exercise to get some PS on me.  Oh, on a side note, I picked up this book to (re)start my PS journey and so far it’s been worth the price I paid.  Some of the EntLib docs mentioned pushing data to Azure storage so I may just start there to see if this can work.
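While I figure out the PowerShell angle, a quick-and-dirty desktop query would look something like the sketch below.  It assumes the standard WADLogsTable that the Azure Diagnostics listener writes to, where Level is a numeric severity (lower numbers being more severe); adjust to taste.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class LogPeek
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<storage connection string>");
        var table = account.CreateCloudTableClient().GetTableReference("WADLogsTable");

        // Level <= 3 roughly covers critical, error, and warning entries.
        var filter = TableQuery.GenerateFilterConditionForInt(
            "Level", QueryComparisons.LessThanOrEqual, 3);
        var query = new TableQuery<DynamicTableEntity>().Where(filter).Take(100);

        foreach (var entry in table.ExecuteQuery(query))
        {
            Console.WriteLine("{0:u}  {1}",
                entry.Timestamp,
                entry.Properties["Message"].StringValue);
        }
    }
}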

Here are the Enterprise Library doc and code downloads if you want to take a look around.

HTH /oFc

Squeezing The Rain Out Of A Cloud

mostly.cloudy.rain.xlarge.mod

This week’s weather has been overcast and cloudy, and this morning the forecast looks like the image above (according to an online weather source).  So I thought I’d share a few things I worked through yesterday and this morning to get some rain out of my cloud.

Here’s what the solution is using, so far:

MVC 4 Internet project – and all the scaffolding bits it gives us.  NuGet packages only include Unity.Mvc4 & Microsoft.ServiceBus right now.  I’ve got some ideas for logging that’ll use some of the PnP Enterprise Library goodness to wire up some business and system logging for this solution, as well as tap into Wasabi for scaling & throttling ideas I have for the solution – more on the last two later on though.

The solution is pretty simple, and that’s how I’d like to keep it for now.  But I needed to start pulling some mock data into grids, and historically being a server-side coder I started out trying to unload my collections with some of the HTML helpers Razor exposes to us.  It really just didn’t feel right, and I found myself trying to hack something together to get some data into the view.  The night before, I had a conversation with John Papa around UI/UX content enablers.  He makes all of this stuff look so easy, so I thought, sure, I can do this too, and I set out to jam in a jQuery plug-in to spin through my data.

I settled on jqGrid after only a few minutes of looking around the jQuery home page.  Not being the client-side junkie most folks are these days, I started Binging for code samples.  After a few hours (seriously, I’m not that smart and some things take longer than they should) I found a post on Haack’s site that talked about which conventions need to match and which ones don’t.  Oh, and after a few minutes I fixed the signature on my controller action and the code started lighting up.  Now it’s only formatting I need for a few views, which I won’t burn too many cycles on.

Using Phil Haack’s example, I had jqGrid running in a few minutes.  But I will say this – I did learn a lot about the HTML helpers ASP.NET MVC offers; they are powerful and very handy.
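To give a sense of what the controller side ends up looking like, here’s a rough sketch along the lines of Haack’s example.  The repository and field names are placeholders for my real ones, and the anonymous object matches the shape jqGrid expects by convention.

using System;
using System.Linq;
using System.Web.Mvc;

public class Registration { public int Id; public string Name; public string Email; }

public interface IRegistrationRepository
{
    IQueryable<Registration> GetForCurrentUser();
}

public class RegistrationController : Controller
{
    private readonly IRegistrationRepository _repository;

    public RegistrationController(IRegistrationRepository repository)
    {
        _repository = repository;
    }

    // jqGrid posts sidx, sord, page, and rows by convention and expects
    // a { total, page, records, rows } payload back.
    public JsonResult GridData(string sidx, string sord, int page, int rows)
    {
        var all = _repository.GetForCurrentUser().ToList();
        var pageOfData = all.Skip((page - 1) * rows).Take(rows);

        var result = new
        {
            total = (int)Math.Ceiling(all.Count / (double)rows),
            page,
            records = all.Count,
            rows = pageOfData.Select(r => new { id = r.Id, cell = new[] { r.Name, r.Email } })
        };

        return Json(result, JsonRequestBehavior.AllowGet);
    }
}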

The side effect of this choice was larger than I thought, though, and required a bit of refactoring.  The view receiving this data was strongly typed with IEnumerable, and now that the data was coming from a jqGrid call to an action that returned a JSON payload, I didn’t need that.  The repository method that was serving the data to the controller looked funny now; I needed to scope the method to return just the requestor’s data, not all of the data.  I may still split this interface up because of ISP (the Interface Segregation Principle), but I’ll keep my eyes open for code smells just in case.

So, there’s a bit of refactoring going on today before I hook up the Azure persistence piece, which is next.  I haven’t quite figured that out yet, but soon.  The easy part is that I can still target my Azure Service Bus queues and tap into the local storage emulator on my box until I figure out the data and/or document shapes for the storage thingee.

Here’s a gist with the view and controller source.

HTH /oFc

Living Inside a Cloud Should Be Easy, Right?

crescent-wrench

It’s been a few months since I dropped the Azure SDK on my desktop, and the tooling has changed considerably, to say the least.  The portal changed a bit as well, but once you get used to it, it just works for you, and unlike before you can see everything that’s going on “up there” at a glance.

However, back in the IDE, particularly inside the code, there are more pieces that are supposed to bolt up to your Azure code.  And if you’re using an MVC 4 web role you can push in a NuGet package called Unity.Mvc4, which comes with this handy little bootstrapper you can use to load your Unity container via a UnityConfig class that runs next to the bundler and routing configs in the App_Start folder.

This was one thing I didn’t realize was new to the MVC 4 scaffolding.  These config classes help keep things out of the Global.asax, where we’ve piled them for a long time.  And the UnityConfig class follows suit nicely.

The idea with the bootstrapper is to help keep the type mappings contained but still load them each time the app domain spins up.  All of the other pieces appear to act the same, i.e. lifetime management, aliasing, and child containers.

The last thing I’ll mention about things fitting together: when I started this solution months ago I was using a previous version, and the upgrade wizard didn’t fire up, so I didn’t get a version bump on my web role “and that’s when the fight started”.

If you’re trying to preserve your old solution and you’re trying to get it to act like an MVC 4 template, don’t.  If you don’t get the version bump from the IDE, stop.  Create (or add) a proper MVC 4 project from the project template dialog and go from there.  Copy your code to the new one, fix up the usings and references and keep going.

While I was doing this refactoring and sorting out my existing unit tests the code started to thin out and I realized that the MVC 4 bits could do what I was making the older MVC project do.  It just took a bit of frustration and brute force to recognize this and keep coding.

I had the unique pleasure of deleting a lot of code and still having everything work well.  I just had to sync with the tooling and the way things are supposed to fit together now.  Same tools, just a different approach sometimes when the bits are out in front of the IDE.  Not a bad thing, just different, and better.

*** Update ***

So I didn’t need the UnityConfig class anyway.  The NuGet step actually plugged the bootstrapper into the root of the website and exposed a static RegisterTypes(IUnityContainer container) method that handles the mappings.  I usually don’t wrap my type registrations in code, but rather put them in the configuration file so I can easily add types on the fly.  The bootstrapper also exposes a static method that handles returning the container.  Here’s a code snippet with one registration added.

Bootstrapper
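Roughly what the generated Unity.Mvc4 bootstrapper looks like, with one illustrative registration added (the repository types here are placeholders, not my actual ones):

using System.Web.Mvc;
using Microsoft.Practices.Unity;
using Unity.Mvc4;

public static class Bootstrapper
{
    public static IUnityContainer Initialise()
    {
        var container = BuildUnityContainer();
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
        return container;
    }

    private static IUnityContainer BuildUnityContainer()
    {
        var container = new UnityContainer();
        RegisterTypes(container);
        return container;
    }

    public static void RegisterTypes(IUnityContainer container)
    {
        // One mapping added by hand; everything else stays as generated.
        container.RegisterType<IRegistrationRepository, TableStorageRegistrationRepository>();
    }
}

// Placeholder types just so the sketch stands on its own.
public interface IRegistrationRepository { }
public class TableStorageRegistrationRepository : IRegistrationRepository { }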

 

 

 


 

HTH /oFC

Clouds Need to Make Rain, right?

So I’ve been working on this cloud stuff off and on for a few months now.  And while the cloud vendors try to make it easy to work with, things aren’t always intuitive unless you clear your mind and stop doing what you remember, and instead do things the way you’re being told they need to work.

Then, after taking your code around the block a few times, you take something someone else coded or created and make it your own.  Most of the time it works this way, but there are times when it doesn’t and you just have to apply brute force and push that rock back up the hill.  And once you do it the first time, everything starts to click (and work).

I guess the idea here is that working with cloud technology is fun and challenging, but you have to keep your eyes on what you set out to build initially and not get bogged down in why something doesn’t work.

If it doesn’t work, start from scorched earth – as in, throw away *all of the code you just wrote* (hard to do sometimes) and start all over.  I did yesterday and tossed about 1,000 lines of source code – and in about 15 minutes worked around a problem I’d been dealing with for a while.

Of course there were other (positive) external forces that helped me get beyond the block I was experiencing, but scorched earth was the right, first, step to take.

And as it worked out, my piece of the cloud started raining on the scorched earth and once all of the smoldering finished, I had something really nice to work with and continue working with.

HTH – oFc

Agile 2012

At the risk of writing a giant post that’s hard to get through, I’m going to outline a few of the choice spots in each of the talks I attended and a few connections I made over the course of last week.

The Gaylord Texan:  Quite the resort – you feel like you’re on an island b/c everything is enclosed under a glass roof.  It’s really a nice resort with many amenities; one of the more notable was the 4K sq. ft. health center that was open from 5:00-23:00 each day.  Most places have one, but this one was huge and had a wide variety of gear, treadmills, etc. to fit any workout in.  Most of the guest-facing staff wore smiles most of the time, but the convention folks for whatever reason didn’t.  Think of an ant farm – that’s how we spread out over the convention space each day for nine hours; we were everywhere and they still managed to feed us and see to our needs over the week.  I’d go back on my own dime some time if the opportunity presented itself; a very comfortable place to be sure.

Agile 2012:: The Conference

I was excited to have the opportunity to attend my second Agile conference, the first being in 2010 at home here in Orlando.  And this one didn’t disappoint in any way – this community is very much alive, well, and kicking!  In every session I attended I was either paired with another person for the workshop portions or part of a group of five or more session attendees solving a problem.  I really enjoy this aspect of the community and conference b/c it reinforces the notion of working together immediately – in our case, five or so minutes after we all sat down at a table and introduced ourselves.  All of the folks I interacted with were easy to talk to and had great questions about what was going on where I worked, and this was reciprocated over the course of the week in all of my sessions.  The conversations and connections with other members of this community were (and are) stellar; everyone is willing to help solve problems in different contexts, you just have to take the time to sit down and let it happen.

Software-Craftsmanship:: Simple Design Applied : Removing Duplication and Naming

Removing Duplication:: We looked through some really “troubling” code the speaker had encountered and had to work out what its intent was.  After no less than three minutes of tracing back and forth through the code (in pairs), the two of us realized the code was trying to build a calendar for a client-side drop-down.  The example was server-side code written in Java, so it was easy for me to follow along once we figured out the intent.  The point the speaker made was about code commonality and variability.  Most of the common (duplicate) code in the sample could be refactored into smaller reusable chunks, but first you have to detect the commonality in the code.  Next, we identified the variability.  Some of it was introduced b/c of the types of arguments being passed to the methods; some of it b/c of a group of if/else code echoing the arrow-head anti-pattern, making the code harder to change.

Naming:: The Stroop Effect, as in Blue Red Black.  The test was to see how quickly you could work out what color the text was referring to, not the color of the text itself.  This translated into what we should be naming the different objects, and the members in those objects, as we go along.  The rule is to prefer less cryptic, more meaningful names.  I’m probably far too liberal with terse character names, but I did realize there’s a happy medium in the length and meaning of the names we choose for things.  A few general guidelines – a good name:

1) is pronounceable

2) avoids encodings and member prefixes

3) suggests why it exists

4) suggests how it should be used

5) says what it does

6) is easily searchable in the code base

So the punchline is: when you’re crafting code, think about the names you’re going to apply.  The names of things don’t have to be elegant, just appropriate.  And check in on variability if you see more of it coming out of your design; it just needs to be easy to spot.  Make the commonality easy to detect as well – for the consumer and for the developer who has to look at your code 12 months from now.  The refactored code was much easier to read, understand, and potentially maintain.  The side effect was that there were more methods – not a huge reduction in the amount of code – but the class’s intent was better understood with simpler design applied.

Collaboration & Communication:: Improving Collaboration and Communication through Improvisation

The notion here was to learn how to listen; as one Greek teacher stated, “we have two ears and one mouth,” so we should be listening twice as much as we are speaking.  Here are a few of the games we all participated in: First Letter, Last Letter; Alliterate Adjective; Three Things in Common; Answer Man; Quick Draw; and we ended the session with “Group Juggle.”  The ones that stuck out were Answer Man, Three Things in Common, and Group Juggle.

Answer Man went like this.  Three people at a time from the session would stand in front of the room and be “Answer Man,” someone you could ask any question.  The trick was that the three people had to construct the answer in sentence form, one word at a time.  The other two people helping to construct the sentence had no idea what word the third was going to say.  As the sentence continued to grow, more context came out and helped them finish it.  Some questions were never answered, though; the words would build a trail that couldn’t find an ending.  The point was to hear what was being said, not what you wanted to hear the next person say.  It’s not as easy as it sounds, and it proved a great point.

Three Things in Common went like this.  It was a basic interview to find three things you and the person you were paired with had in common.  I was paired with someone from the Midwest and another from Germany – this took time, longer than I expected.  And the three things couldn’t be obvious; they had to reflect something going on outside the conference, closer to our real lives and closer to home.  The notion here is to be patient until you get information you both can agree on, not something that’s not quite fact.

Group Juggle went like this.  The speaker split the room into two groups of 30 or so people.  We went out into the hallway and formed two circles.  Each circle was going to toss a rubber ball to a random person across from them until everyone had caught and thrown the ball once.  We started with one ball and ended up with four going at once.  The fifth ball we were all juggling had to go in reverse while the other four were getting tossed in the regular order.  Many of us dropped at least one, or became confused because we took our eye off the ball being thrown at us b/c we were watching how the rest of the team was doing.  The rubber balls represented tasks we juggle on a team and hand off to one another.  The point was that some folks would wait until the catcher was ready, and some wouldn’t.  Some folks would verbally ask if the catcher was ready for the toss; others were on automatic and just threw, caught, and reset to catch and throw the next thing coming at them.  Again, we’re all working with people we met an hour earlier and don’t understand how they “catch and throw” tasks.  It’s probably a good idea to iron this out when you, or a new member, joins your team.

Just before we wrapped up with Group Juggle, we had a discussion about the “Five Dysfunctions of a Team”; here’s the graphic:

5 Dysfunctions of a Team

The layers aren’t isolated; each is part of another problem a layer or two above it.  Looking at other bloggers’ takes on this graphic, my mileage is going to vary, but I’ll try to convey the key points the speaker was making.

If the top three are failing, there may be “no conflict,” meaning everyone is agreeing with everything without sharing opinions.  Conflict can have a few different meanings in this context: “no conflict” shows up as eye-rolling and silent disagreement, for example, or conflict can be rude, nasty, put-down behavior.  By agreeing to disagree, or working through the conflict, you can get to the next layer, commitment, and start working there.  You keep working up through the next two layers, and once you get to, and succeed at, the “Results” layer, all egos have left the room.  I’ll probably read Lencioni’s book at some point, probably before I begin working on a larger Agile team with a common goal; most of what the speaker shared was enough to get the main ideas across about working through the layers and recognizing the interpersonal pieces that aren’t working well across the team.

Tuesday’s Keynote:: Scaling up Excellence. Mindsets, Decisions, and Principles: Bob Sutton

Wow, this guy was amazing, and it was the first time I had heard him speak.  My leader picked him out of the conference program as one to listen to, so I listened and wrote as fast as I could – you can only write so fast to capture all of the nuggets flying off the stage; hopefully they taped this and will share it post-conference sometime.  Here are a few points from his session:

Scaling from few to many:

  • At FaceBook,
    • not just sharing the mindset, but practicing and living it, not by what we tell them to do; it’s clearly apparent what it is and how they need to perform
    • make one little change, one after another
    • Six-week boot camp for new members
      • Do chores for other groups working inside 12-13 short projects
      • the job (where they fit in) is determined at the end of the six weeks
      • touch the metal, understand the code base, move fast, and break things
  • At Starbucks
    • How to water down the Starbucks experience…
      • Remove the tangible experiences from the stores – specifically grinding coffee in front of customers, and the smell that came with it, something customers remembered most – which was later removed to make room for the bagged version of the brand
  • More vs. Better
    • Voltage loss is sometimes induced when an organization scales out; things get lost in translation as the scaling occurs, and the same results aren’t achieved at scale because the localized plan to achieve them isn’t communicated, received, or interpreted correctly
    • Voltage loss may not be bad; getting half as much “better” may still be twice as good as the way it was
    • Catholicism vs. Buddhism (replication vs. localization)
      • Replication Trap: Home Depot opened 12 stores in China and introduced their Do-It-Yourself culture to a Do-It-For-Me culture – they’ve closed 7 of the 12 stores so far
  • Link Hot Causes to Cool Solutions (might be my favorite part of the talk)
    • When folks are really riled up about something, they’re probably thinking very, very hard about the problem
    • Offer them a cool solution that gives them somewhere to channel their energy.
    • The Watermelon Offensive: a large university wanted all undergrad students to wear bicycle helmets around campus, but none were interested, while almost all graduate students wore them.  The safety group decided to host the watermelon offensive: bikes lying all over a field, with what appeared to be riders attached to them and a smashed watermelon near where each head would be.  At that moment they offered helmets at $7, less than half of what one would normally cost, and it worked for most who attended.

Software Craftsmanship::Deliberate Practice – becoming a better programmer: Alex Aitken

The rule is that to become an expert you need 10K hours (about five years of full-time work) of deliberate practice – that’s a long time.  The thought that hit me was: have I spent 10K hours on anything besides breathing?  Maybe so; I hope so, anyway.  Deliberate practice is how you pull yourself closer to being an expert at any level, or just keep moving yourself forward.

In this session we did a FizzBuzz randori where we used test-driven development to solve FizzBuzz in two-minute pairing sessions.  Afterwards we did a mini-retrospective to see what we learned through the course of the randori…

  1. taking smaller coding steps through the process produced more dialog for each pair
  2. learn to (re)name code members where and when appropriate
  3. try something, fail, throw it away

This was just one randori – YMMV where you live and code with your friends and peers – but the observations are for real: things you can take away with you to help you write your code a little better than you did the day before.

The speaker also discussed the coding calisthenics he uses.  He proposed not trying to use all of them at once; working them into his code incrementally helped him craft his code a different way at times and think of other ways to solve a problem.  Here are the nine rules he uses for exercise (there’s a small sketch after the list showing a couple of them in play):

  1. Use only one level of indentation per method
  2. Don’t use the else keyword
  3. Wrap all primitives and strings
  4. Use only one dot per line
  5. Don’t abbreviate
  6. Keep all entities small
  7. Don’t use any classes with more than two instance variables
  8. Use first-class collections
  9. Don’t use any getters/setters/properties
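Here’s a tiny FizzBuzz sketch of my own (not the speaker’s code) that leans on two of the rules, one level of indentation per method and no else keyword, along with the kind of test a randori pair might start from:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class FizzBuzz
{
    public static string Say(int number)
    {
        // Guard-style returns keep the indentation flat and avoid else entirely.
        if (number % 15 == 0) return "FizzBuzz";
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return number.ToString();
    }
}

[TestClass]
public class FizzBuzzTests
{
    [TestMethod]
    public void Multiples_of_three_say_Fizz()
    {
        Assert.AreEqual("Fizz", FizzBuzz.Say(9));
    }

    [TestMethod]
    public void Everything_else_says_the_number()
    {
        Assert.AreEqual("7", FizzBuzz.Say(7));
    }
}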

This was probably my favorite SC session because it was closest to the real-life experience I’ve had in some of the dojos I’ve participated in here at home.

I’ll wrap this post here; there’s a lot more to recall and remember, but I hope you’ve got the idea that for the heavy price tag the conference carries, it totally changes the way you think about projects, collaboration, succeeding, code-crafting, and connecting with the folks you just met and may never meet again.  It’s all good.

Windows 8 Accelerator Labs

Windows 8


This week the Microsoft office in Tampa hosted a three-day Windows 8 accelerator lab for anyone wanting to port or create an application for the latest version of Windows Phone or Windows 8 Metro.  I think most folks stuck close to the C# / XAML flavors for their applications, and a lot of applications were updated, ported, or created by the group.

I was able to attend for a day and a half and worked on creating a Windows 8 version of an application I’ve had on the shelf for a while.  The app I was converting was a Mango (Windows Phone 7.5) flavor, but instead of creating an updated phone version I decided to create one that would run on Windows 8 instead.  I had a lot of challenges at first, but leaning on the documentation helped quite a bit.

The biggest things were moving items from Windows Phone isolated storage to Windows.Storage (that’s the namespace) on Windows 8, and then there was navigation.  Some of the built-in templates handled navigation out of the box without disrupting the workflow I had in my original Windows Phone application.  I focused on those two because they are tied together; it’s a matter of knowing (or better, trying to understand) when the application moves from one view to another.  Once it does, the application needs to understand when, and if, it needs to save what’s been entered.  The other opportunity here is understanding where to save what was added or changed.  This was the majority of my challenge for the work I completed over the last day and a half of the labs, and it was very educational to be sure.
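For a flavor of the storage change, here’s a small before-and-after sketch.  The setting names and payload are made up, and the two halves live in the two different projects; the APIs are the isolated storage settings on the phone side and the Windows.Storage application data classes on the Windows 8 side.

// Windows Phone (Mango) project – using System.IO.IsolatedStorage;
public class PhoneStateStore
{
    public void SaveLastView(string viewName)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;
        settings["lastOpenedView"] = viewName;   // hypothetical setting name
        settings.Save();
    }
}

// Windows 8 (WinRT) project – using Windows.Storage; using System.Threading.Tasks;
public class Windows8StateStore
{
    public async Task SaveLastViewAndDraftAsync(string viewName, string serializedDraft)
    {
        var localSettings = ApplicationData.Current.LocalSettings;
        localSettings.Values["lastOpenedView"] = viewName;

        // Larger payloads go to a file in local storage rather than a setting.
        StorageFile file = await ApplicationData.Current.LocalFolder
            .CreateFileAsync("draft.json", CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(file, serializedDraft);
    }
}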

Hats off to Jim Blizzard for being the MC and host of the event; he did a great job (as did others, but I didn’t get their names) walking around the room, answering questions, and handing out advice for the challenges folks were having.  At different times of the day folks gave demos of the work they had completed or started, and for that they were offered a new Windows Phone – not a bad deal.  Many folks were trying their hands at XAML, including one Android developer who created a Windows Phone app on the last day and demoed it to the group.  One person hadn’t worked with XAML before and created a roving repairman application just using the developer documentation and the application templates that are available.  The Windows 8 templates are few in number, but they are definitely just enough code to get you started.  There are numerous samples that can help with starting out on the application type you want to build if you just want to get something quick and dirty up and running.

The review process is a bit more “picky” in that you can vet your own application before you submit it to the marketplace, so you can fix any of the obvious problems the review process might notice.  But that’s okay – it’ll save the time you’d spend tapping your foot waiting for the acceptance email and help you focus on the problems your solution might have.  It looks like the static analyzers (FxCop / StyleCop) built into Visual Studio are there to help you write better code, only here you get a quick pass-or-fail notification for how your application is built.

I really enjoyed this event and taking the time to dive into Windows 8, but as I was driving away from the MS office I couldn’t help but think that Windows 8 is the new Silverlight target for applications.  It’s deeper than just spinning up a C# / XAML application that runs in a browser like we did for Silverlight; this one has a much larger platform to run on.  The project templates target much more than just a Silverlight-style solution – there’s the HTML5 flavor that runs like a website – so if you’re a web developer and you don’t mind traipsing through some Windows namespaces for your client-side code, you might like this next version of Visual Studio (the 2012 RC dropped today) for your development desires.

HTH,

ofc

Tried Something New

Wooden Apple

I decided to try out two new Apple devices a few months ago.  The main point of this was not to upset my local group of cronies, who are somewhat “all things PC and not fruit,” but simply to upgrade my current PC to newer hardware.  I’d heard of other folks trying this and repaving the initial Lion or Snow Leopard image with Windows 7, but I had a few things in mind: I wanted to shift the way I used a laptop, specifically to one with fewer buttons and (IMHO) superior graphics and display; I still wanted to keep a Windows 7 image on the machine; and I wanted my music and photography to follow me around instead of putting some pics over here and some over there.

Sure, there’s a slew of gestures and key combinations to learn but the last time I had to learn a keyboard I was in typing class in high school, so my brain is enjoying the attention and exercise at the moment.

Two good friends (also coders) have been using Apple machines (and phones) for a while, and I used them as resources to ask specific questions about the configuration – a.k.a. the features to add at buy time – and about how to run Windows 7 as well, since my day job still requires some coding tools that only run on Windows.  They’re smart guys, so I trust them, and they were right.  It has been a blast so far, and moving back and forth between the MBP and the HP hasn’t been terrible, but I do find myself mashing on the track pad on my HP laptop where obviously nothing happens.

A good friend of mine told me one time, “if you want to work, use a PC; if you want to play, use a Mac.”  He was right.  Totally.  Now that I use my MBP for both work and play, things seem more normal – not sure what word to use there, but maybe you get it.  At any rate it’s been a great journey so far; the hardware is awesome, the graphics are clean and crisp, and there’s no shortage of help when I’m trying to figure things out.  The thing that probably sticks out in my mind the most is the amount of time I don’t spend waiting for the laptop to start up and shut down.  I’ve probably saved about 12 days of my life since December not waiting for things to start and stop.  I was glad to wait in the old days; now I’m a bit less patient, and I like it better when things are snappy.

I also recently purchased an iPad.  The main driver for this was to have FaceTime with my daughters and their kids, since there’s a lot of distance between FL and OH.  The FaceTime so far has been awesome, and it’s great connecting randomly with my girls.  I also discovered so many apps to help organize me – most notably Remember the Milk and FlipBook.  Other apps that stream things are more of a distraction during go-time so I won’t list those, but I will say that there’s probably no reason to continue buying music when so many applications can stream it.

FlipBook really does a fine job of collating all of the blogs I read (7 total); plus it connects all the other social stuff too, which is nice but not necessary – and it’s free.  All of my magazine subscriptions have companion applications as well, which means I don’t have to stop reading.  Oh, and all of the eBooks and PDFs I stuff into Dropbox are available too, and they read like I’m using a giant Kindle.  I can’t read outside, but if I’m outside I probably won’t have a book in my hand anyway.  I still like books.  I have a lot of them, so I don’t see myself replacing that experience with an iPad.

All of the other usual stuff is basically the same – the PC could do the same as the iPad or MBP; there are some nicer trade-offs, though, and I’m really enjoying this change so far.  A lot!

Closing Feedback Loops with Sonar

Last year my shop began using an application called Sonar.  To over-simplify what Sonar does: it aggregates all of your static analysis and test results into one view – very, very easily.

From what I can see most of the code is written in Java, and initially it was written for Java – an assumption on my part.  But the community has also provided plug-ins for .NET and other platforms.

I installed this on a Windows Server 2008 R2 x64 VM with modest memory and disk.  The two main prerequisites were VS2010 (I use the Ultimate/Awesome SKU) and the latest version of Java.

I set up Sonar locally on the VM, meaning everything is running locally, before I tied into the production version of Sonar’s database that the rest of the shop is using.

In a nutshell here’s how it works.

1) Your build process runs on a build box or your local box and places the compiled binaries into a folder (bin\debug works fine too) your sonar server can see.

2) You call (manually or in an automated fashion) the Sonar profiler from your root src directory.  The profiler has a configuration file in the root of your source directory where the .sln file lives (there’s a sketch of that file after this list).  The profiler uses this information to understand where your test directories are *if* they are not part of the solution, or not recognized in the .sln file; however, it will use the .sln file for most of its pointers.

3) Once it knows what it can and can’t do, it will execute static analysis (StyleCop / FxCop) against your code and run any tests.

4) The profiler collects the results in XML files and applies the gathered analysis to the database Sonar is using.

5) You open your sonar dashboard (http://localhost:9000/sonar) on the server hosting Sonar and your dashboard displays the analysis it gathered.
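For reference, the configuration file mentioned in step 2 is a plain properties file; mine looked something like the sketch below.  The project key and name are made up, and the .NET plugin layers its own keys on top of these (check the plugin docs for the exact names it expects, including the one that points at your .sln file).

# sonar-project.properties – lives next to the .sln in the root of the source tree
sonar.projectKey=myshop:registration
sonar.projectName=Registration Solution
sonar.projectVersion=1.0
sonar.language=cs
sonar.sources=.
# The C# / .NET plugins add their own settings on top of these,
# including the pointer to the Visual Studio solution file.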

The plugins are the bits that do all of the work on the profiler’s behalf, specifically Gallio, but before I get ahead of myself here’s the plugin round-up…

c.sharp.plugin.roundup

There are more plugins and analytics, but right now we’re just going to get started with this lot.  Gallio is the test runner, so if you are writing tests in any of the familiar testing frameworks you’re covered.  I’m using MSTest for mine, but I was really thinking of using xUnit or NUnit instead to relieve a few of the dependencies.

Here are some of the gotchas I ran into being a Java semi-noob – it’s been a while since I tried to get Java running on a Windows box, so if this is you, just follow the docs closely and Bing if you get stuck.

0) Install Visual Studio 2008 or 2010 SP1 first.

1) After you install Java, set the JAVA_HOME var for the server

2) After you install Sonar, that directory effectively becomes your “sonar home”, and lots of stuff needs to know that as well.  So, think about how your build/CI executable directories are arranged before you install Sonar and the Sonar profiler.

3) After you install the Simple Java Runner, a.k.a. the “sonar runner” in the docs, set the SONAR_RUNNER_HOME var for the server.

4) Now, start running the rest of the installs based on this.

Sonar.Required.Tools

I went with the defaults and used the embedded apps, and only installed PartCover and Gallio.  The Sonar profiler will look for the directories where FxCop, StyleCop, and your coverage engine are.  This is configurable inside the Sonar web site, where you can make further tweaks if you need to move stuff around.

5) This is basically a 32-bit process, and running it in an x64 environment takes a few extra steps.  The docs cover this, but referring back to item #2, I had to install Gallio and PartCover in a different directory (per the docs), outside of “c:\program files”, which is the default.  At this point I decided to move everything into a directory structure that looked like this:

D:\Build\Gallio; D:\Build\PartCover; D:\Build\Sonar; D:\Build\SonarRunnerHome;

I also needed to run corflags (inside a VS2010 command prompt) on the Gallio.Echo, PartCover, and PartCover.Browser executables to force them to run as a 32-bit process, per the documentation.
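In case it saves someone a search, those commands look roughly like this; adjust the paths to wherever you installed the tools.

corflags "D:\Build\Gallio\bin\Gallio.Echo.exe" /32BIT+
corflags "D:\Build\PartCover\PartCover.exe" /32BIT+
corflags "D:\Build\PartCover\PartCover.Browser.exe" /32BIT+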

I think that about covers the gotchas I ran into.  The last thing I still need to sort out is defining the “program directory” dynamically for the sonar runner.  The runner always assumes you’re in the root of the source tree and wants to build a “.sonar” directory if one doesn’t already exist.  To get my local version running, I’ve hard-coded the source directory I was trying to profile for now.

Hope some of this helped.

/ofc

Wrapping My Head Around Windows Azure–Deploying

I mentioned in one of my first posts that deployments made zero sense to me.  I found a few blog posts today where a few folks were just disappointed with the level of documentation.  And things are a bit cumbersome, clunky, or just not intuitive for folks trying to make “it” happen – roles, storage, and certs.

I’m going to take a whack at trying to make some sense of it.  I’ve got many words and pictures to share because I hate when stuff is hard, especially when I’m just trying to deploy it.  I felt this way working with ClickOnce.  You wouldn’t often redeploy simple or (mostly) static solutions, but if you do, using the publish utilities can get a bit frustrating.

If you look at my first post, it tells you which Azure bits I’m working with; make sure you’re up to date b/c stuff will continue to change until they get it right or make it better – or both, which would be nice.  Patience.

So I’m assuming you have something that is tied to an Azure project that you want to publish, great! Let’s go!

From the top

Just like a web project we right-click on our Azure project and pick “Publish…” from the context menu.  We got that.

Which deployment path should you choose?  If you have not done the following already:

  • Setup Remote Desktop Connections with Windows Azure
  • Created and installed certificates on your Windows Azure subscription

then choose the “Create Service Package Only” option.  We’ll start there, I did and didn’t get any new scars from it.

If it looks like this, click “OK”, we’ll talk about the other stuff later on, promise.

Azure.Deploy.PkgOnly.00

As soon as you click “OK” you’ll see your solution start to build; that’s normal.  It’s also going to build a deployment package and the configuration file for your build, and will create them in the file system under your project.  They don’t get added to source control unless you want to add them.  I’m using Git, so the whole file system is being watched and mine were checked in.  The files are added to a publish folder for the build type (Debug, Release) you have selected.  So my “Debug” deployment files went here:

Azure.Deploy.PkgOnly.02

If you had a release build, it would show up in “bin\Release\Publish” instead.   Those two files in the folder are what we’ll use when we are in the Azure admin site to deploy our app.  Follow me over here, and stand right there and watch this.

My deployment has two roles and talks to one queue and one table – simple.  In the dev fabric on our local machines the magic is just spraying everywhere from the Azure-colored fire hose and everything works.  You probably stepped through the connection string in your configuration file and found “UseDevelopmentStorage=true” for your data connection, haven’t you?  Well, now we’re going to be talking to live queues and tables, and that setting won’t work any longer.  So (as I figured out yesterday) we need to tell our deployment where our storage lives.  First we need to create it if we haven’t already, but if your HelloWorld app doesn’t use anything but a website, you won’t need to do this now.  However, I would encourage you to follow along anyway.

To the cloud!  I hate that line…

BrowseToPortal

I found this yesterday, and it was quite handy indeed.  From the cloud project in your solution, right-click and choose “Browse to Portal”.

This gets you where you need to go.  When you get there, log in and go to your Azure account.

When the portal opens, look in the lower left-hand corner and find the “Hosted Services, Storage Accounts & CDN” button.  It looks like this; click it:

image

Here’s my subscription with no hosted services or storage accounts. 

Subscription.Service.Storage

Let’s create a storage account; a storage account is something that groups together your blob data, table data, and queue messages.  We can delete it when we’re done if we don’t need it, or we can leave it online.

The main thing to get here is that it’s tied to your subscription and you can have as many distinct ones as you need.  And yes, they do cost money when you start dropping data into them – YMMV.  The URLs they use for communication are based on the name you give the storage account.  So if I called my storage account “junkinthetrunk”, the endpoints my applications use to get stuff would look like these:

https://junkinthetrunk.blob.core.windows.net

https://junkinthetrunk.queue.core.windows.net

https://junkinthetrunk.table.core.windows.net

Before we go on, let’s talk about affinity groups.  Affinity groups tell Azure what part of the country you want to store your data in.  If you were born in the Midwest you might choose that region, but if your app is serving the southeastern US, you might want to rethink that choice.

There are only a few choices, and in most (though not all) cases you’ll want to pick a data center that’s closer to your business, especially if you are running a hybrid solution – for example, a native ASP.NET site hosted by an ISP that uses Windows Azure for some type of storage.

Click “Affinity Groups”, then click on your subscription record in the grid, then click the cool-looking “New Affinity Group” button – in that order.  You’ll get one of the dialogs pictured below, and as far as I know they are free.  Fill in your choice of group name, pick a region (or Anywhere if you like), and click “OK”.  Here’s how I named mine.

AffinityGroupName

                   AffinityGroupName.Entered

image

So, let’s build the storage account; we’ll use the affinity group as we do that.  Click on Storage Accounts (the green thing above), then click on the cool file cabinet button.  This will cause a dialog box to show up that’s going to ask you a few more questions about what and where you want to store your “file cabinet”.

Let’s fill in some more blanks, k?

CreatingStorageAccounts

Using the affinity group we created earlier, here’s what it looks like. 

I only have one subscription so it was filled in for me.

I only enter the unique part of the URL – this will be the name of the storage account, and it shows up in the management portal and in your config.  So make it a good, clean name and not something like “thisappsucks”.

A variation I came up with was to divide the storage accounts into different stages so I could potentially promote code from the development fabric, then to staging, then finally to production.  Today, “junkinthetrunk” was already taken, so I used “dev0junkinthetrunk” instead.  Also, Windows Azure ensures the name is unique as you type it out.

If this makes sense to you, use it; just remember that the more storage you have, the more you pay when you use it.  My process has been to delete everything in my subscription when I reach a milestone, then just start over.  Call it deliberate practice.

I’ve clicked “OK” on the dialog above and Azure is off creating my awesome storage account.

Storage.Descriptors

If anything in documentation (yours or a vendor’s) asks for an account name, it’s probably referring to a storage account.

So let’s back up and recap a bit.

You opened the Azure Management Portal.

You created an Affinity Group.

You created a new Storage Account and applied the Affinity Group to it.

The point I want to make here is about not getting the cart in front of the horse: think about your storage while you’re sketching your architecture on the back of your dinner napkin.  If you don’t need it, fine – you’re already ahead.  But if your app needs it, it is better to set it up ahead of time.

You can create storage after the fact like I did once – BUT you’ll have to alter the configuration file in the management portal, and (based on how busy things are) you’ll wait forever for the instances to tear down and build back up.

Any changes you make to configuration will restart any instances you have running.  It’s also probably fair to mention that this is why you cannot change the Service Definition in the management portal: you cannot add roles on the fly, just change the number of instances through the configuration.

Let’s go back to the deployment files and update them with the storage account information.

Azure.Deploy.PkgOnly.02

We are only going to open the ServiceConfiguration.cscfg file and add our storage information and access key to it.

My application has one web site (role) and one worker role; each is named in the configuration file and initialized with one (1) instance.  If you know you want two or three instances, change it here and they’ll all spin up when Azure initializes your package.

ServiceConfig.Deploy.00
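For reference, the instance count lives on each role inside the .cscfg; a trimmed-down sketch (the service and role names here are made up) looks something like this:

<ServiceConfiguration serviceName="MyCloudService"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="MyWorkerRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>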

Also, and more important to note, each role has its own data connection; that’s where we’re going to plug in the junkinthetrunk storage pointers we discussed above.  Here are the before and after for the connection information.  I snipped the account key (it is called the access key in the storage account descriptors) just to make it fit; you need to enter the entire key.

Before:

<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />

After:

<Setting name="DataConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=dev0junkinthetrunk;AccountKey=2V6JrRdFnrST2PCQHA<snip>" />
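For what it’s worth, here’s roughly how a role picks that setting up in code.  This is a sketch using the storage client library I have handy (method names vary a bit between versions), and the queue name is just an example.

using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class StorageConnections
{
    public static CloudQueue GetQueue(string queueName)
    {
        // Reads the same DataConnectionString setting shown above, whether it
        // points at development storage or the live junkinthetrunk account.
        var connectionString =
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString");
        var account = CloudStorageAccount.Parse(connectionString);

        var queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExists();
        return queue;
    }
}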

Back in the Azure Management Portal, click on “Home” or “Hosted Services, Storage Accounts & CDN”.

Now we’ll create a new Hosted Service by clicking this really cool “New Hosted Service” button.

image

The dialog that’s presented needs more information; I filled in my blanks like this:

Create.HostedService.After

Once you click “OK” you’ll see this modal dialog; it’s harmless.  It’s basically telling you that you will only have one instance created; change that if you want to.  Click “Yes”.

image

I color-coded each section above to describe and compare, after the service gets created, what information gets stored where in the portal.

Back in the management portal Azure is busy deploying our package…

image

From the color-coded box above the service name, URL prefix, deployment name, and add certificate items are described below, so I’ll skip those for now.

The value in the dark green box is the same Affinity Group we created for our storage account.  Now our roles/applications are running in the same data center as our data tables, blobs, and queues.  Building the group makes this easier to get right when we deploy.

The light green blocks are the defaults, I stuck with those for getting my stuff into the staging environment.

Package location and Configuration files were the first things we discussed; remember they were created when we chose the “Create Service Package Only” option.

Checking on the deployment again…

And now things are spinning up…

image

Back to the comparison…

Of all of the blanks we filled in, here’s where a few entries landed, compare this to the color-coded box above.

Create.HostedService.After..Comparison

The “Windows Azure Tools” certificate is something I created yesterday while I was trying to get the Remote Desktop Tools working for a deployment update.  From where I was sitting the deployment kept timing out so I gave up for the moment.

At any rate, if you want to associate a certificate (.pfx) with your deployment, click the “Add Certificate” button, drill to the location where it lives, enter your cert’s password, then click “OK”.  And if you fat-finger your password you will get a chance to enter it again.

image

All done!

image

To take it for a spin we just need to click on the “Development.Registration.Svc.deploy” node, and then, in the right-hand pane, clicking the staging DNS name will open the web site that’s been deployed.

Deployment.DNS.Link.Opened

The highlighted URL is the one we clicked on, so now we can add some data.

image

I’ve been using the Cloud Storage Studio tool from Cerebrata Software for this and we can see the data from my “junkinthetrunk”.  The tool allows me to see my SQL Server backed development storage and Azure storage accounts.

image

 

If you’re just here for the good news, I hope this post helped you in some way to flatten the Windows Azure learning curve.

If you want to stroll into the stuff that didn’t work for me, keep reading.

What didn’t work

Certificate Management and Installation

Once I had my initial deployment finished, I found a bug and needed to redeploy my solution.  This is when I set up my certificate and associated it with my deployment.  This sets up the trust between Azure and me.

Inside the management portal you can choose “Management Certificates”.  This is looking for the .CER file, not a .PFX file.  So then I needed to open the cert manager snap-in, export the public version, and apply it to the solution.  So now I had a cert; it’s the same one I used today.

My point is that you need to create the cert before you get into a deployment scenario.  Create a cert, but create it in the way step five of this blog post describes:

http://msdn.microsoft.com/en-us/library/gg443832.aspx

This will allow the cert to be used during the initial deployment, and to be added to the deployment as well in a way that satisfies Windows Azure certificate manager.

Redeployment

VS2010.Deployment.Failure

I kept getting timeouts from Windows Azure during a redeployment.  Even after I figured out the cert management issues, I couldn’t get past a 90-second timeout threshold somewhere.  All of the blogs I found were related to pushing a big ol’ VM up to Windows Azure; this was an app that I had already deployed six or seven times.  I’m going to try again from somewhere else – maybe my network or Windows Azure was just having a bad day.

Oh, and if the certs fail for any reason or trust is an issue, the blue ring around this deployment warning turns bright red.

Documentation

There were a few complaints out there after the recent refresh that not all of the documentation had been updated (folks using old docs on new bits), and that things were just a little too cumbersome.

My only response to that (and something I’ve shared with readers who have worked with me) is this.  Do you remember the 1.2 toolkit for SharePoint 2007?  And what a pain in the butt it was to use?  We are so far from that experience and I’m glad for it.  We had already gone through the 2003 version and tried to sync (in our minds) the precautions to take for WSS and MOSS before we tried to, or thought about, upgrading. 

I’m sorry, but the teams building this stuff must have listened, or are working for someone who experienced much wailing and gnashing of teeth, not to mention a lot of bitching, when it came to the poor tooling back then.  I’m thinking it will only get better from here, and right now it’s not bad at all from where I’m sitting.

Resources

I used a lot of links to get this far, and got some of it just by thrashing about a bit as well.  But here are some links that helped out.  Some of the information I gathered from these posts I have presented in a different way, but I still wanted to give some credit to the folks who put the first bits of guidance before me.

// UseDevelopmentStorage=false

http://simonwdixon.wordpress.com/2011/05/06/azure-usedevelopmentstoragefalse-deployment-hangs/

Channel9 – CloudCover : http://channel9.msdn.com/Shows/Cloud+Cover

Setting Up Named Auth Credentials : http://msdn.microsoft.com/en-us/library/ff683676.aspx

Basic VS2010 Deployment : http://msdn.microsoft.com/en-us/library/ff683672.aspx

*The* ASP.NET/Azure Quick Start: http://msdn.microsoft.com/en-us/library/gg651132.aspx

There’s one more gap I want to fill and that’s the code I wrote for the solution.  I only got into the data (model, messages, and entities) before, and I’d like to talk more about the infrastructure code before I go dark and get really busy building out this solution.

Again, if you read this far into the post, thanks.  I hope it helped you in some way to learn to use Windows Azure.