Monday 10 December 2012

Mercurial vs Git

Most posts about distributed version control are written by people who are biased to one particular solution. This is usually because that person has a wealth of experience in one system, and not so much experience in the other.
This post will be no different.

Prerequisite

I'm biased towards Windows. I know this. Mac and Linux users live in a different world to your average .NET developer, and that's always going to influence how I view these products.

Git's GUI Offering

Git offers Git for Windows. It's sub-standard: you can't see a branch map, it crashes and falls out of sync with the command window, and you can't do a fraction of the things in the GUI that you can do on the command line, including merging and removing branches.
I understand that the makers of Git are trying to make their product more palatable to Windows users, but in my humble opinion, if they're going to offer a GUI at all, they should bin Git for Windows and make better use of third-party components.

Third-party components

Mercurial actively offers TortoiseHG as a complement, and its benefits are huge. You're able to create, merge and close branches, push, pull and contort Mercurial to your whim with a few clicks of a mouse. For Git, TortoiseGit is on its way, but it needs to be pushed harder as a package, because public awareness of it as a product choice is low. Proof of this is the fact that there are still people trying to find ways to make TortoiseHG work with Git repositories!

Conclusion

I know that lovers of the command line will always prefer a black screen with coloured writing to provide them with text-based representations of their source control. I'm also very well aware of how powerful both products are in their purest CLI-based form. However, in three weeks of using a combination of the Git for Windows GUI and the command line interface (not to mention a LOT of Googling and documentation-reading!), I still couldn't do a fraction of what I learnt to do using TortoiseHG and Mercurial in the space of a week.
Productivity shouldn't have to suffer just to be one of the cool kids.

Thursday 22 November 2012

Create a text area for a spreadsheet using NPOI

I've been doing a lot of work lately for a client that requires the exporting of large amounts of data into Excel spreadsheet format. After a fair bit of research into the more flexible .NET-based packages available, I decided on NPOI.

Most questions about NPOI can be answered with a bit of research, but here's one I found tricky: creating a proper text box on a spreadsheet that you can add formatting to, and include large amounts of text in.

To create a textbox, do the following.

Create a patriarch object

This is your base to create any drawing object. I created mine as a member at the top of the class so that I can re-use it class-wide as I please.
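Something like the following would do it (a rough sketch, assuming the HSSF (.xls) side of NPOI; the workbook and sheet here are illustrative):

using NPOI.HSSF.UserModel;
using NPOI.SS.UserModel;

HSSFWorkbook workbook = new HSSFWorkbook();
ISheet sheet = workbook.CreateSheet("Export");

// The patriarch is the sheet's top-level drawing container - every shape hangs off it.
HSSFPatriarch patriarch = (HSSFPatriarch)sheet.CreateDrawingPatriarch();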

Create an anchor point

This is the area that your textbox will sit on. You need to change startingRowNum and endRowNum to be the start and end rows (height) of your textbox. The width is determined by startingColNum and endColNum.
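Continuing the sketch above (the bounds are illustrative values). The first four arguments are pixel offsets within the corner cells; the last four are the column and row co-ordinates:

int startingColNum = 1, startingRowNum = 1, endColNum = 8, endRowNum = 12; // illustrative bounds
var anchor = new HSSFClientAnchor(0, 0, 0, 0, startingColNum, startingRowNum, endColNum, endRowNum);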

Create the textbox, and put something in it.

You have to create a rich text string to put in there, but on the plus side, that gives you the opportunity to play with formatting and so forth.
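Still assuming the HSSF API (the exact member for setting the text may vary slightly between NPOI versions):

// Create the textbox on the anchor and give it a rich text string.
HSSFTextbox textbox = (HSSFTextbox)patriarch.CreateTextbox(anchor);
var text = new HSSFRichTextString("As much text as you need can go in here...");
// Fonts and other formatting can be applied to the rich text string before assigning it.
textbox.String = text;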

I found this particular bit of functionality to be quite well hidden in the documentation, so if you take this any further or do anything interesting, please feel free to comment and let me know!

Thursday 8 November 2012

"Validation failed for one or more entities" == Vague!

Sometimes when you're debugging exceptions around Entity Framework, the exceptions can be rather unhelpful. I've encountered this error on many an occasion:

"Validation failed for one or more entities. See 'EntityValidationErrors' property for more details."

Thanks to this post on Stackoverflow, I now know how to get at that EntityValidationErrors property that was unusable from the quick watch window.

In my case, I wrapped the update command that was throwing the exception in a try/catch block, and cast the exception as follows:
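Something along these lines (a sketch; the wrapper method is illustrative, not the original code):

using System.Data.Entity;
using System.Data.Entity.Validation;

static void SaveWithValidationDetail(DbContext context)
{
    try
    {
        context.SaveChanges();
    }
    catch (DbEntityValidationException ex)
    {
        // Surface the detail that the quick watch window hides.
        foreach (var entityErrors in ex.EntityValidationErrors)
        {
            foreach (var error in entityErrors.ValidationErrors)
            {
                System.Diagnostics.Debug.WriteLine("{0}.{1}: {2}",
                    entityErrors.Entry.Entity.GetType().Name,
                    error.PropertyName,
                    error.ErrorMessage);
            }
        }
        throw;
    }
}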

This then let me view exactly which field was causing the problem, and I then adjusted my validation to suit!

Thursday 25 October 2012

Problems with sending email over SMTP in .NET?

I occasionally have problems sending email to my own local SMTP server, and get pretty paranoid that a real recipient might receive a test email that they're not supposed to see.

You know you haven't set SMTP up properly on a .NET website when you get an error like this: "Failure sending mail" or "No connection could be made because the target machine actively refused it 127.0.0.1:25"

My advice? Don't bother setting SMTP up on your local box. What's the point when you can intercept all outgoing email and view what you're going to send (and to whom) with a little application that sits in your system tray?

Enter SMTP4Dev. I'm going to sound like a JML advert, or possibly like I work for them (I don't), but I can't recommend this little application enough. It requires no configuration whatsoever provided you're going to use port 25 for SMTP, and it's free!
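For illustration, this is the sort of code it intercepts (a minimal sketch; the addresses are placeholders):

using System.Net.Mail;

// Point System.Net.Mail at localhost:25 and smtp4dev catches the message in your system tray
// instead of it ever reaching a real recipient.
var client = new SmtpClient("localhost", 25);
client.Send(new MailMessage("test@example.com", "recipient@example.com",
    "Test email", "This never leaves the local machine while smtp4dev is listening."));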

That URL again: SMTP4Dev.

Friday 19 October 2012

Elmah in .NET MVC 3 won't work with CustomErrors

I've been doing a lot of work on an MVC 3 application recently, and it's already got Elmah installed ready to report all sorts of errors to help you debug your application.

Except it doesn't.

When the <customErrors /> element of the web.config has its mode set to "On", a friendly error message appears when a user experiences an issue, but Elmah doesn't capture the error. This is because ASP.NET MVC swallows the exception, and this method in the global.asax.cs is responsible:
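The offending registration is the one the default MVC 3 project template generates (a sketch of the standard code):

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());
}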

The filters.Add call attaches the HandleError attribute to every controller, and that's where the exception gets lost.

I found a really neat solution on a StackOverflow question, but it's a long way down the page, so I'm recording it here for easier reference.

Create a class that looks like this:
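A sketch of the sort of filter the answer describes (the class name is illustrative):

using System.Web.Mvc;
using Elmah;

public class ElmahHandledErrorLoggerFilter : HandleErrorAttribute
{
    public override void OnException(ExceptionContext context)
    {
        base.OnException(context);

        // HandleErrorAttribute marks the exception as handled, so signal Elmah manually.
        if (context.ExceptionHandled)
        {
            ErrorSignal.FromCurrentContext().Raise(context.Exception);
        }
    }
}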

And reference it in the global.asax.cs file ahead of the other attribute like this:
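Roughly like so, keeping the new filter ahead of the default one:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new ElmahHandledErrorLoggerFilter());
    filters.Add(new HandleErrorAttribute());
}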

The various answers and a full discussion of the problem are available at that StackOverflow question; it's well worth a read.

Wednesday 15 August 2012

Creating Excel files using NPOI - Getting a cell's co-ordinates by ICell

The greatest thing about exporting data to Excel is the ability to create formulae that will do a lot of the user's hard work for them later on.

I've been working with the .NET port of Java's POI called NPOI to allow me to create a quite complicated spreadsheet from scratch, and I've noticed that there are quite a few holes in the library that (unless I'm mistaken and the functionality exists) need to be filled.

My next few blog posts will probably be spent documenting some of this functionality, and I'm going to start with getting a cell reference by the cell object itself. For the purposes of this blog post, I'm going to refer to the co-ordinates of a cell "A1, B2, C3" as a cell reference.

I couldn't find a way in NPOI to allow me to get a cell reference from a cell. I found that I had to piece this information together myself, and I used the CellReference class to do it. Once I had the elements, I just joined them together.
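A sketch of the idea, using the CellReference helper from NPOI.SS.Util (the helper method name follows the POI original and may differ slightly between NPOI versions):

using NPOI.SS.UserModel;
using NPOI.SS.Util;

public static string GetCellReference(ICell cell)
{
    // Convert the zero-based column index into its letter form, then append the one-based row.
    string column = CellReference.ConvertNumToColString(cell.ColumnIndex);
    return column + (cell.RowIndex + 1); // e.g. "C3"
}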

This function is particularly useful to me, because while I'm iterating through a loop for a collection of data, I can grab the cell reference as soon as I create a cell, and keep a note of it to create subtotals and totals farther down the line. You may want to wrap some error trapping around this code and catch the right exceptions.

Thursday 19 July 2012

Using SignalR and MVC to create a Facebook style news feed

A thought occurred to me whilst driving home a few days ago: there's a lot of buzz around node.js and servers pushing content directly to clients, and I realised I had no idea how this could be accomplished using the .NET framework.

I decided that I'd be really interested to find out if it would be possible to use .NET to produce a Facebook-style news feed that automatically refreshes as soon as something is pushed to the list. Podio does a really good job of this too. As soon as a user posts an update on a story, the change is immediately pushed to all clients complete with a snazzy "knock-knock" sound effect.

There are a few ways of accomplishing this using standard AJAX. The first is long-polling, which keeps a connection open and waits for the server to send something when it's ready. The second is using the Javascript setInterval function to poll the server every x seconds and see if anything has changed.

However, I wanted a Websockets-style solution that would manage connections better. I did a bit of reading around the subject, and arrived at SignalR.

The clever bit about SignalR is that it provides all the support for this sort of connection for you. It starts by trying a WebSockets connection, and if your browser (or the webserver for that matter) doesn't support it, it picks a different method of transport until it finds something that fits. For Chrome and IIS 8 it would be WebSockets, and for any Internet Explorer version, it uses Forever Frame. You can read more about that on this StackOverflow answer from one of the authors.

My plan was this. I would produce a simple MVC application with a standard create form with two fields: name and update type (either news or event). When the MVC submission was complete, the update would appear in all connected browsers, almost like a one-way multicast chat program, but over HTTP instead of a standard socket.

I then wanted a bit of JQuery in place that would push the list down and fade my new update in at the top of the list, because the list is ordered by publish date.

I created a blank MVC project, and created a controller class with skeleton actions for index and create. My view contained references to JQuery, JQuery UI, the SignalR Javascript includes, and the standard rendering of the existing list.

To keep the point of the project clear, this example doesn't connect to a database. I create a list of existing news and event updates in global.asax.cs, and store them in the HttpContext.Application object.
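In global.asax.cs, that looks roughly like this (a sketch; an Update class with Name and Type properties and the "Updates" application key are assumptions for illustration):

protected void Application_Start()
{
    // Seed some existing updates in application state in place of a database.
    Application["Updates"] = new List<Update>
    {
        new Update { Name = "Site launched", Type = "News" },
        new Update { Name = "Christmas party", Type = "Event" }
    };
}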

I then needed to create a Hub class that my client would connect to in order to receive updates. If you want the client to fire methods on the server-side, you can create them in this hub class. In my case however, I'm just pushing content from the server to the client, and this is done later on in my controller class.
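With the later Microsoft.AspNet.SignalR naming (the post was written against the 2012-era SignalR 0.5 API, so treat the hub name here as illustrative), the hub can be as bare as this:

using Microsoft.AspNet.SignalR;

public class UpdatesHub : Hub
{
    // No server-side methods are needed: the controller pushes to clients via the hub context.
}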

My controller class takes the name and type of update, and adds it to the list of updates in the Application state as any MVC application would do. The difference however, is just before returning the view, I call a new function called SendMessage(Update). It looks like this:
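A sketch of that function inside the controller, again using the later SignalR API; the hub and client method names match the hub sketch above:

private void SendMessage(Update update)
{
    // Grab the hub context from outside the hub and invoke a client-side function
    // ("broadcastUpdate") on every connected client via the dynamic Clients object.
    var context = GlobalHost.ConnectionManager.GetHubContext<UpdatesHub>();
    context.Clients.All.broadcastUpdate(update);
}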


The idea behind a lot of SignalR examples is to push client events to the server, which then distributes them to all clients. I didn't want client-side code muddying the waters of a simple data submission, because in the future the data may change in a different way.

For example, we may want to create a delegate method server-side that performs some functionality and then pushes updates to the clients, rather than waiting for client input. My example is broadcasting a server-side event to all connected clients. This is all happening in BroadcastUpdate using .NET 4.0 dynamic variables to call client functionality.

The last step was to implement some Javascript that would listen on a connection for a call from the server to a client-side method. When this response is sent over, we get a serialized JSON object containing our update. We then do a bit of nifty JQuery to make it fade into appearance at the top of the list.

The Javascript looks like this:
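Something along these lines (a sketch using the generated jQuery proxy at /signalr/hubs; the hub name, client method and element IDs mirror the assumptions in the C# sketches above):

$(function () {
    var updatesHub = $.connection.updatesHub;

    // Called by the server via Clients.All.broadcastUpdate(update).
    updatesHub.client.broadcastUpdate = function (update) {
        var item = $('<li style="display: none;"></li>').text(update.Name + ' - ' + update.Type);
        $('#updates').prepend(item);   // push the list down...
        item.fadeIn('slow');           // ...and fade the new update in at the top
    };

    $.connection.hub.start();
});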


It's best to get the source code and see the demo in action to really understand how it all fits together, so I've uploaded the project to a repository on GitHub so that you can get it running yourselves.

I understood much more about how to use SignalR after watching this video.

Monday 9 July 2012

Bulk-moving Umbraco documents and updating their URLs

When moving an Umbraco document to a new parent, I had a problem where the 'link to document' property wouldn't update to reflect the new location.

Another problem I had was when I had to bulk move multiple nodes in one transaction.

After you've moved a document to a new parent, publishing the document isn't enough; you have to regenerate the XML from the parent down.

However, doing this every time (using library.RefreshContent();) involves clearing the umbraco.config. Doing this whilst bulk-processing lots of documents means that the umbraco.config goes missing mid-transaction, and Umbraco will throw an exception.

To prevent this from happening, I introduced a config value to control whether that refresh takes place or not.
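A sketch of the idea ("RefreshContentAfterPublish" is an illustrative appSettings key, not the name from the original code):

using System.Configuration;

bool refreshContent;
bool.TryParse(ConfigurationManager.AppSettings["RefreshContentAfterPublish"], out refreshContent);

if (refreshContent)
{
    // Rebuilds the XML cache (umbraco.config) - skipped while bulk-processing documents.
    umbraco.library.RefreshContent();
}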

You can see the gist of what I've done in the sketch above. You can read more about how to publish a document (including the answer I found) on the Umbraco wiki. Once the node has been published and moved, the document's URL still won't have updated. This is because the umbraco.config hasn't been regenerated from the database, so to force it to update, you have to:
  1. Stop the website application pool
  2. Delete the /app_data/umbraco.config file
  3. Start the application pool
When this is done, your document URLs should be updated.

Let me know if any of this isn't clear, and I'll do my best to clarify!

Monday 2 July 2012

JQuery.Cycle messes up on DOM ready

I had a bug recently when using JQuery.Cycle for a slideshow with large images.

The cycle method fails on page load when the images aren't cached.

This was because I was using:
$(document).ready(function() {
    // code here - runs as soon as the DOM is ready, before large images have finished loading
});

Instead of that, if you use:
$(window).load(function() {
    // code here - runs once all images have loaded, so .cycle() sees their real dimensions
});

your content will have loaded before the cycle method is executed.

See this Stack Overflow article for the full explanation:
JQuery cycle fails on page refresh

Monday 25 June 2012

Errors everywhere - When Umbraco XSLT fails globally

I recently experienced a problem working on a large Umbraco site for a client where the site produced XSLT errors across the board, and none of the existing macros seemed to work.

It turned out that caching had been enabled on these macros, and that the app pool was recycled at some point during the night.

The Umbraco community is littered with examples of this error; a lot of them are in one particular post.

Tuesday 19 June 2012

Sitecore vs the world

In previous places of work, I've picked up skills from working with multiple content management systems. I've built websites using Drupal and Wordpress, but the main ones I've worked with have been built within EPiServer and Umbraco environments.

My latest experience, however, has been working with the Sitecore environment.

For the purposes of this blog post, I'll talk a little bit about EPiServer, Umbraco and Sitecore, and how they measure up to each other from a developer's point of view.

EPiServer
The majority of my work with EPiServer has been in v5.3 R2, and I discovered quite quickly that you can't use ASCX files as templates. This meant we had to make our traffic drivers (our slang for little bits of re-usable content across the site) implement ITrafficDriver (an interface we made up), just so the container ASPX page knew which ASCX file to pull in dynamically.
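To make the pattern concrete, here's a rough sketch (ITrafficDriver is the interface we made up; the member name and the container code are illustrative):

public interface ITrafficDriver
{
    // Virtual path of the .ascx user control that renders this piece of content.
    string UserControlPath { get; }
}

// In the container ASPX page's code-behind:
// Control control = LoadControl(trafficDriver.UserControlPath);
// contentPlaceHolder.Controls.Add(control);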

On the plus side, there's lots of help from people within the EPiServer community, and despite a little bit of waiting between responses, if there's a genuine big bug in the CMS, the support ticket system will get you heard by one of their staff in Sweden.

It also has great access management, allowing you to restrict the visibility of parts of the content tree in the back-office to certain users.

I also found it really quite straightforward to bolt extra bits of functionality onto EPiServer using custom modules, which allowed me to create entirely new parts of the CMS (new content trees, dynamically) using the API.

Umbraco
The great thing about Umbraco is that it's open-source and free. That's a massive boost to any small organisation. However, this free product comes at a price.

I was tasked with writing a wrapper CMS for Umbraco which would effectively encapsulate the underlying functionality and obscure the back-office from users, because Umbraco couldn't do this (or handle multilingual content) by default.

The idea behind it was to ensure that users never saw the back-office, and so user permissions could be set to stop people viewing different parts of the content tree. This wasn't too difficult (although it was time-consuming and buggy) because there's great access to the entire Umbraco API.

However, you can't account for poorly written code, and version 4 (in any guise) of the Umbraco framework is poorly coded. Swallowed exceptions (empty catch blocks in try statements) and in-line SQL statements make the code difficult to read and debug.

Sitecore
My experience with Sitecore so far is very new. I've only been looking at it for a week, but I've already discovered the following benefits:

Sitecore's package management is better
You can export templates, validation rules, content... whatever you like! The process is simple, and the installer seems robust and facilitates a pretty good deployment process.

Better built-in support for logging within the UI
This is prettier (in my opinion) than the EPiServer interface, and more reliable than Umbraco's.

Handling of multiple databases for BIG sites
Sitecore has a tiny little drop-down icon that lets you choose between database environments. This in turn allows you to publish content to multiple website environments from the same CMS, which struck me as pretty impressive because it allows for better handling of load-balanced servers.

Sitecore's very new to me, but I'm keen to learn more!