Dave Juth's Blog
20 November 2009
ASP.NET MVC Error Handling
I wrote a short how-to article about trapping unhandled errors in ASP.NET MVC web applications.
15 July 2009
Website is Back Up
Oh. My. Goodness. My website has been down for a couple of weeks, and I have been away, busy, and just not on it. It's back up now. It was down due to a bug in a browser settings file, and the error message, with its unhelpful "[No relevant source lines]", made it a little difficult to figure out.
19 June 2009
Google Calendar Sync in iPhone 3.0
This is good! It wasn't very straightforward until I found the URL specified on Google's Getting Started with CalDAV page (see the "Enable Google Calendar in Apple's iCal" link).
Here's how to set up your Google Calendar on your iPhone:
* On your iPhone, tap Settings
* Open Mail, Contacts, Calendars
* Tap Add Account...
* Select Other, then Add CalDAV Account
* For Server, enter this URL, replacing YOUREMAIL@DOMAIN.COM with your Google login/email address: www.google.com/calendar/dav/YOUREMAIL@DOMAIN.COM/user
* Enter your Google user name and password
* Tap Next; your iPhone will attempt to connect to your Google Calendar and verify that it is set up correctly.
Now when you open your iPhone's calendar, you'll see all of your Google Calendar events. Very cool.
10 June 2009
ZIP Code Lookup with the Google Geocoder
I have a client who ended up (due to a misunderstanding on my part) with a list of user registrations in which some addresses are missing ZIP codes. So I decided I could probably fix all of them by using a ZIP code lookup service. The U.S. Postal Service has a web service for this, but after signing up and trying it I decided it was a real pain. The sign-up process was a little heavy-handed, but worse was the test server they restricted you to until your "application" was "ready," at which point you would need to contact them for access to the real site.
I figured there must be a better way, so after some searching (including programmableweb.com and data.gov), I found that Google's Geocoder can do this.
So my simple, kind of raw REST demo of using the Google Geocoder for ZIP code lookup is working. It's useful and simple; take a look.
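For flavor, here's a rough sketch of the kind of lookup the demo does. The Geocoder's v2-era REST endpoint returned JSON; because of the browser's same-origin policy, a call like this usually goes through a small proxy on your own server ("/geocode-proxy" below is a hypothetical endpoint that just forwards the query to http://maps.google.com/maps/geo?output=json&q=...), and the response shape here is from memory, so treat it as approximate:

$(document).ready(function() {
    var address = "1600 Pennsylvania Ave NW, Washington, DC";
    // Ask our (hypothetical) server-side proxy to geocode the address.
    $.getJSON("/geocode-proxy", { q: address }, function(data) {
        if (data.Status && data.Status.code === 200 && data.Placemark.length > 0) {
            // The formatted address echoed back includes the ZIP code; the
            // structured value is buried in Placemark[0].AddressDetails
            // (the exact nesting varies by result).
            alert("Resolved: " + data.Placemark[0].address);
        } else {
            alert("No match found.");
        }
    });
});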
Make Your Own Twitter Badge with jQuery
For the geekily inclined, you can easily make your own Twitter badge using jQuery and the Twitter API and display your latest tweets on your website. Step-by-step instructions. Do it now!
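If you just want the gist before clicking through, here is a minimal sketch, assuming the old unauthenticated v1 JSON API of the era; "yourhandle" and the "#twitter-badge" container id are placeholders:

$(document).ready(function() {
    // JSONP request for the five most recent tweets.
    var url = "http://twitter.com/statuses/user_timeline/yourhandle.json?count=5&callback=?";
    $.getJSON(url, function(tweets) {
        var list = $("<ul></ul>");
        $.each(tweets, function(i, tweet) {
            // .text() escapes the tweet content for us.
            $("<li></li>").text(tweet.text).appendTo(list);
        });
        $("#twitter-badge").empty().append(list);
    });
});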
Performance of jQuery Selectors in ASP.NET
My old post about the problems with control IDs in ASP.NET and using them with jQuery did not describe the small performance hit of using the "ends with" selector, e.g., $("input[id$='txtDateStart']"). This post at encosia.com has some good tips for optimizing this a bit.
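The short version of those tips, as I understand them (the "#aspnetForm" context id is an assumption; use whatever your form's client id actually is):

// 1. Qualify the tag name and give jQuery a context node so it doesn't
//    scan every element in the document.
var txtStart = $("input[id$='txtDateStart']", $("#aspnetForm"));

// 2. Or skip the attribute scan entirely by emitting the server-side
//    ClientID into the page (inline in the .aspx):
//    var txtStart = $("#<%= txtDateStart.ClientID %>");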
03 June 2009
jQuery and REST and Flickr and Google Maps Demos
My demo page has been updated, massively, over the past few weeks with REST demos using jQuery, Flickr, the Google Maps API and some other stuff. It's fun but I also want to build up to something more novel and powerful using these techniques. No promises, but fingers crossed that I find time to keep moving forward.
As far as blogging goes, well, it's been over two years since I've updated this. Doesn't bother me right now.
12 December 2007
ASP.NET Control IDs and jQuery
I'm using jQuery a lot more for DOM manipulation, UI sugar and general lessening of javascript drudgery. In an aspx page, referencing a control using js is a pain because of how ASP.NET munges the control name. For example, "ddlDate" becomes something like "ctl00_ddlDate." I found lots of articles about this and various approaches, but they all seemed a little too involved or fragile.
The good news is that the original control name always seems to be preserved at the end of the rendered control id ("always" -- I hope that's correct, and so far it's been working). So by using jQuery's "ends with" attribute selector ([id$='...']), here is a really simple way to handle this:
<script type="text/javascript" src="Images/jquery-1.2.1.pack.js"></script>
<script type="text/javascript">
$(document).ready(function() {
    // Match on the end of the munged id, e.g. "ctl00_ddDateName"
    var ddlDate = $("select[id$='ddDateName']");
    var txtStart = $("input[id$='txtDateStart']");
    var txtEnd = $("input[id$='txtDateEnd']");
    // etc...
});
</script>
18 October 2007
SQL Server 2005 Management Studio RANT #1
Rant number one of many about Management Studio -- number one as in the first, not as in the worst thing about Management Studio, because it's not the worst thing:
Scripting a database to SQL Server 2000 (i.e. 8.0) format using the Tasks | Generate Scripts feature gives me a nice .sql script file. But when I run it in SQL 2000's Query Analyzer, it fails with syntax errors! Specifically, it's missing open parentheses on multiple lines. You've got to be kidding!
I know I can fix all of these errors, and I know there are products that do a better job of scripting database schema than Microsoft's own tools. But, this should work.
If I have time and continue with the many complaints I have about Management Studio, the conclusion is obvious: we need a much better IDE for SQL Server 2005.
26 July 2007
Microsoft Open Source Website
At first I thought this was an April Fool's joke. In July. Just not really sure what to think now.
20 July 2007
Collaborative Editing
I like reading Dare Obasanjo's blog. Though he is frequently too pro-Microsoft and, as one would expect, anti most-of-the-rest, he raises interesting issues, and his counterarguments to much of the web zeitgeist are insightful. But I'm not sure why he can't use something besides MS Word to edit a document with his coworkers.
One of his usual targets, Google, offers Google Docs, which is a great way to collaborate with a small team. I have used this to collaboratively edit documents, and I like it a lot for small group collaboration. And it's only one of maybe a hundred or more different web-based solutions for collaborative editing.
The irony, of course, is that Dare is pointing out a key shortcoming of the traditional and still present Microsoft way. That is, fat-client, file-based solutions to things. Maybe that is still the problem with many of the Microsofties (and I certainly see this from time to time with people I know well who have a strong Microsoft background) and with the company in the larger sense -- that the web is still this thing that they don't feel too comfortable embracing to solve typical problems.
02 May 2007
Guerrilla Spam Filtering
We use Exchange and Outlook at the office for email, like too many organizations in the world. However, we have (I am embarrassed to say) not been using any spam filtering service or appliance. The result is that junk mail rolls in heavily each day.
In Outlook, you can create rules and I have a billion of them to filter out keywords and known sources of spam (to the extent that that is effective). It is an OK 80% solution. But when I'm away and using Outlook Web Access, none of my desktop Outlook rules are applied.
So my solution has been to leave Outlook running on my desktop at work. Then, when I run OWA my inbox is cleaned to the 80% level that my Outlook rules accomplish. This is not recommended and is not a good spam filtering strategy, but it actually works.
03 April 2007
ASP.NET Master Page Title
There are a lot of articles about how to programmatically set the title for pages on a web site, such as this one at odetocode.com. But I need to do something much more mundane -- set the title for ALL pages on my website to be the same but optionally, to be able to programmatically override the title for any web page.
The straightforward way to use the same title for all pages if you are using master pages is to 1) set the <title> in the master page's HTML and 2) remove it from every content page's page directive. That works great. But I want to make this a little easier to maintain, so I hooked this into my site's base page and use web.config to maintain the site's title.
My site's base page (BasePage), from which all of my site's pages are derived, is declared like this:
public class BasePage : System.Web.UI.Page
In the BasePage constructor, I get the site's title from web.config and trap for any exception:
// Requires: using System.Web.Configuration;
public BasePage()
{
try
{
Title = WebConfigurationManager.AppSettings["SiteTitle"];
}
catch (Exception ex)
{
System.Diagnostics.Trace.WriteLine(ex.Message);
Title = "My Site";
}
}
This also allows me to forget about manually removing "Untitled Page" from the page directive in every content page. To programmatically set the title, should I need to, I expose a property that prevents setting it to blank (or to the default Visual Studio .NET setting):
public new string Title
{
get
{
return base.Title;
}
set
{
if (value.Trim().Length > 0 && value != "Untitled Page")
{
base.Title = value;
}
}
}
Now, to maintain my site's title, I just set it in web.config:
<add key="SiteTitle" value="My Site" />
So this gives me a fairly clean way to maintain my site's title and also to override it per content page should I need to. For example, my contact page can append " - Contact Us" to the existing title by modifying the base page's Title property in Page_Load.
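For example, a minimal sketch of that override in a content page that derives from BasePage:

protected void Page_Load(object sender, EventArgs e)
{
    // Append to the site-wide title set by BasePage; the Title setter
    // above ignores blank values, so appending is safe.
    Title = Title + " - Contact Us";
}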
Seems like a lot of work for something so simple. I'm sure there may be a cleaner way, but for now this seems fairly good.
21 March 2007
Visual Studio Find using RegEx
Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaargh!
Why couldn't Microsoft make the regex search use some existing regex dialect? Maybe they did (randomly!) pick one, but it's not the regex dialect they use in the .NET Framework.
I admit to always finding another way when faced with searching for something like the following lines (which happens to be a task for me today), but I'm keen to do it more easily:
CREATE PROCEDURE
CREATE  PROCEDURE
So using Microsoft's VS.NET syntax, it's this:
^create:Zs+procedure
That's good to know: :Zs is a SPACE, and :Zs+ is ONE OR MORE SPACES. And if I want to find these:
CREATE TABLE
CREATE FUNCTION
CREATE VIEW
It's this:
^create:Zs+:w
So :Zs+:w is ONE OR MORE SPACES FOLLOWED BY A WORD.
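For comparison, the roughly equivalent searches in the standard .NET regex dialect (which, again, the VS Find dialog does not accept) would be:
^create\s+procedure
^create\s+\w+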
And I'm going to refer back to this post because I won't remember this two months from now.
14 March 2007
Durable Information
Email is a wonderful way to communicate. I keep in touch with people I would have long ago neglected because of it. It's almost free. It's fast. It's super convenient. It's flexible, and allows me to send not just my words, but pictures, files, links and even viruses if I'm irresponsible. It's fairly private, and can be totally secure if I choose the right provider and tools. OK, spam is a nightmare, but more or less, email is a great thing for communicating.
Unfortunately, email doesn't cut it for storing information that is durable, a phrase that I'm sort of adopting or making up for the context of this post. By durable I mean the following:
- It needs to be found later by someone other than the email sender or recipient
- It constitutes a record of a decision or other important information that is of interest to a group outside of the email sender or recipient
- It is "owned" by someone other than the email sender or recipient.
I work on a small software project (I am the sole developer) for a client of my employer. The client and I communicate frequently about the state of project tasks, why certain things aren't meeting his needs, etc. This goes on over the course of one "release" of the project, or about six weeks. He emails me, I respond, he responds, etc. Then I find a problem, I fire off a message, he ignores it, we revisit it a month later without remembering much about it... and on and on.
This is typical small-company project management, which is to say, it's not being managed. The only way to get a complete picture of what was decided, how it was interpreted, what is left to do, how to log in to the FTP server -- whatever -- is to sift through and piece together artifacts from the following:
- My inbox
- My sent items
- The client's inbox
- The client's sent items
- Our IM logs
- Some documents squirreled away on my company's network
- His [My Documents] folder
- etc.
Yes, my company owns my inbox, but that is not the point. The issue is how to reduce the friction created by my laziness. That my company owns the information I am mismanaging means I am obligated to store these bits of decisions and communications in such a way that they can be found -- by my managers, auditors, the next developer who takes on this project after I'm on another, the client, the QA people so they can tweak a test plan, et al.
At the very least, emails should be saved off to a public folder. This is a pain, however. A good project management tool (such as Basecamp) is much preferred for aggregating this type of communication and preserving it. Email should be used for notification or quick comments. Anything more long-lived than spur-of-the-moment has to be put somewhere else. Then it can be searched, prioritized, validated, transferred and generally managed. Durable information, not fleeting bits of conversation scattered across multiple inboxes.
13 March 2007
Harder to Type, Easier to Maintain
One of my favorite software books, Steve McConnell's Code Complete, rails against hard-coded strings because they make an application difficult to maintain. At "typing time," it's the easiest and quickest way to do things. Thereafter, though, you've created a debt that you will pay down for the life of a project.
When we work with DataSets, DataTables and other means of packaging data, it is hard to avoid littering code with literal strings that represent column names. This is not helped by our tools -- how hard would it have been for Microsoft to expose an enum for column names from a DataTable object in a generated typed DataSet? (And why is "strongly-typed DataSet" preferred over "typed DataSet"?)
Several years ago, when I first started using .NET, I created a database column enum and constants generator. Point it at a database and let it generate enums or constants (in C# or VB.NET) for every table and view, or pick the ones you want. It's part of the suite of tools I use to create the data layer and business layer in my applications.
It's a simple thing. So instead of writing code like this:
dRow["LastName"] = "Smith";
dRow["FirstName"] = "Bob";
I can write it like this:
dRow[CustomerColumn.LastName] = "Smith";
dRow[CustomerColumn.FirstName] = "Bob";
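For illustration, a sketch of what the generator might emit (the class and column names here are hypothetical; the real tool emits one class per table or view):

public static class CustomerColumn
{
    public const string LastName = "LastName";
    public const string FirstName = "FirstName";
    // ...one constant per column in the Customer table
}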
A typical application is going to have references to columns in many places:
- Assigning values to columns
- Persisting data
- Filtering (using the DataTable's .Select method or a DataView's .RowFilter property)
- Sorting
- Ad-hoc querying
- Data binding and other designer-based tasks for a form or web control
When your database schema changes, just regenerate the source file for the enums/constants and fix up your code. Most changes will now break the application at compile time, not run time, which makes these issues trivial to identify and fix. A large application with thousands of references and many developers just cannot be maintained with hard-coded strings for column names. Catching these errors at run time requires 100% code coverage, and happens way too far downstream not to lose you money.
GridViews and other designer-based code may still be a chore, but you're not making things worse; that's how they are whether or not the rest of your source code has hard-coded values.
Does "LastName" look nicer than CustomerColumn.LastName? Probably. Is "LastName" easier to type? I'd say so. But it creates a mess, one that too many developers are keen to accept, unfortunately. Keep in mind, what I'm talking about here is for developers who haven't adopted object-relational mapping or other means of encapsulating and abstracting object data. If you're still using DataSets and DataTables, you need to consider the friction you are accepting by hard-coding string column names, primary keys, default values, filters and so forth.
12 March 2007
Agile FUD
Why do some people argue as though "agile" is some grand methodology, a prescribed process, comes with a big book and a two week training course and has a rigid set of mantras? It doesn't as best I can tell. I have adopted some agile practices in my daily routine, such as unit testing (though not TDD as of now), intensive refactoring, simple and frequent communication and continuous integration. There are a lot of agile practices that I do not have the expertise to try right now, or the right projects to try them on or the right team to try them with. I am basically fitting what I can into my current environment, reading all I can and reexamining every month or so to identify what else I can apply or do differently.
I think this is the whole point of agile. That is, there is no one prescribed way. Rather, there are principles or fundamentals, and the practices reflect these in a sort of toolbox approach. And above all, the personnel involved are intelligent and experienced enough to use these things to improve how they create software and in turn improve the software they create.
For example, sometimes we have projects that are remediation types of engagements. In other words, we are hired to fix something and it may last for a month. Can I do TDD? Should I set up a continuous integration server? It depends. I have enough experience to know which tools to apply to solve this problem.
So when I read something like this that portrays agile as some all-or-nothing "methodology," it makes me wonder first, if people like this are using some prescribed methodology already or instead have none whatsoever, and second, if they aren't just looking for a way to bash these ideas by portraying them as something they are not. That post at secretgeek makes it sound as though agile is loaded with ideas that contradict each other. I don't think this is the gist at all.
One tool from the toolbox that has had a huge impact on how I write code is designing for testability. This is one of the first things I consider when creating any code. It forces me to think about:
- How to structure the solution in Visual Studio .NET. I always create a ".Tests" project in addition to the main project (it used to be ".Test" until one of the versions of the Test Driven .NET add-in for VS.NET caused problems in projects named this way). Instead of creating some goofy form to drive my tests and therefore requiring me to manually test each time I make changes, all of my public methods can be launched through my unit tests. Right-click, Run Tests. Low stress. This is pretty standard stuff.
- How to design a class. When thinking about how to test it, you must immediately consider what the constructor does, how to initialize state, whether a static class is a good idea for the scenario, etc. Having to change these things later puts you into a much more defensive mentality, where you feel like you are constantly catching up and rethinking very basic design decisions. Again, pretty standard. Thinking about testing forces better design decisions.
- How to design its interface. When you know how your class can be used (i.e., because you thought about "exercising" it with unit tests at the outset), your interfaces are much cleaner and more consistent throughout a project.
- How and where to initialize and destroy resources. Basic stuff, but again, having to tweak things later can be annoying, and being inconsistent has subtle implications for robustness.
- Where to stash configuration information so that it is accessible in a very flexible way. Does the class need a SqlConnection? How do you provide it? Do you have different databases for development, testing, staging, performance testing and production? Instead of cluttering the class with code that accepts these, is it better to make it config-based? If you do, your ".Tests" project needs to be defined a certain way with its own config file, etc...
- How NUnit can automatically run my unit tests to help me control regression. Now that you've thought through these other things, everything is set up to do this. You can make changes and perform major refactoring at will and know what the health of your application is.
Pretty standard stuff. I am amazed that it's still not universal.
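To make the NUnit point concrete, here is a minimal test of the era; the class under test and all the names are hypothetical:

using NUnit.Framework;

// Hypothetical class under test.
public class InvoiceCalculator
{
    private decimal total;
    public void AddLineItem(decimal amount) { total += amount; }
    public decimal Total { get { return total; } }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_AddsLineItems()
    {
        InvoiceCalculator calc = new InvoiceCalculator();
        calc.AddLineItem(10.00m);
        calc.AddLineItem(5.50m);
        Assert.AreEqual(15.50m, calc.Total);
    }
}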
09 March 2007
ASP.NET 2.0 DataBinding Redux
In a previous post I mentioned my skepticism about using ObjectDataSource to bind business objects in ASP.NET 2.0. My desire to "implement some well-defined interfaces in our business objects and just have this stuff work" is touched on by the EntitySpaces guys, but it's more than just what I mentioned and it's touched a nerve.
I think the musing that Microsoft may have yanked what support there was for design-time data binding in 2.0 in order to develop their own ORM functionality in ADO.NET v.next is likely correct. Just like Sandcastle vs. NDoc, Team System vs. NUnit and NAnt, and the other things I mentioned previously, it seems Microsoft is going to go its own way and quash a good portion of the open source efforts that build on its tools.
SQL 2005 Script Files and VSS
Using Microsoft Visual SourceSafe 6.x with Microsoft SQL Server Management Studio (for SQL Server 2005) is a bit of a pain for scripting database changes. One problem: if you don't save the script file as ANSI, VSS cannot use its diff feature to show you the differences -- it will simply say, "Binary files differ." You need to save the file as ANSI, since that's all VSS 6.x supports for visual diffs.
To save your .sql files as ANSI:
- Instead of just clicking the Save button, select File | Save As...
- In the Save As dialog, click the little arrow on the Save button and choose Save with Encoding...
- Pick an ANSI code page (e.g., Western European (Windows) - Codepage 1252) and click OK.
08 March 2007
RFPs
Interesting take on RFPs at airbag. We have been occupied with some proposals recently, and I have to agree this is rarely a very fruitful way to land new business. So many of these RFPs are really for one or more of the following purposes:
- The company sending it out can satisfy some requirement that they found a consultant in a competitive manner
- They've already found a consultant and need to appear that they did it in a competitive manner
- They are looking for the lowest cost bidder and that is their top priority
- They are not clear about what they want and are soliciting ideas through the RFP process.
Some of the comments to the airbag post really hit home:
"The problem is rarely defined and the client usually has a vendor and solution already in mind."
"They take a lot of time, you only have a marginal chance of winning the work. It really just drives your overhead up with a lot of non-billable work, making the clients that didn't send you a RFP essentially pay more."
"The traditional RFP process is damn near a no-win situation for us. We ran the numbers and we've got a horrible track record when it comes to winning work of an RFP. Given the time it takes to address them (much more than other ways of bringing work in) we've realized that RFPs aren't really something we should be spending a whole lot of time on."
"Submitting a "proposal" of any sorts before there's been any communication is akin to ordering a bride online. It may work for some people but it's unlikely they'll have a relationship that'll last. Too much room for misunderstandings and miscommunications."
06 March 2007
Software Design's Dirty Little Secret
We spend a lot of time wringing our hands about how to do a better job gathering requirements. I think this bit reflects a very common scenario in how we convince clients that we're experts, and how things really end up going. We put forth some process or methodology that sounds impressive, like it's really going to squeeze every bit of uncertainty out of the project as early as possible and lead us down a clean, straight trail to the solution. Utter crap, yet we continue to push it because that is what clients demand we tell them. So the Design Observer lets us in on a version of the dirty little secret, one that applies in good part to software design as well:
"When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you're lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can't really explain that part; it's like magic. Sometimes it even happens before you have a chance to tell me that much about your problem! Now, if it's a good idea, I try to figure out some strategic justification for the solution so I can explain it to you without relying on good taste you may or may not have. Along the way, I may add some other ideas, either because you made me agree to do so at the outset, or because I'm not sure of the first idea. At any rate, in the earlier phases hopefully I will have gained your trust so that by this point you're inclined to take my advice. I don't have any clue how you'd go about proving that my advice is any good except that other people — at least the ones I've told you about — have taken my advice in the past and prospered. In other words, could you just sort of, you know...trust me?"
So, is that how it really goes? Well, most of the time, it sort of goes that way. It has to. If custom software development were not so nebulous and prone to reexamination and modification at every step, it wouldn't be custom software. In other words, you would instead solve your well-defined problem by going to CompUSA and buying a solution for $79.99. The next alternative would be to modify your problem so it could be solved by the $79.99 solution. Some people do just that, and successfully. Using Microsoft Excel is a good example. It's probably the most widely used database application on the planet, though it was never intended to be a database.
If you want truly custom software to help manage a truly unique problem (and lots of people have such a need), you should know what it takes. Creativity. A prescribed process can't get you 100% of the way there. You must be willing to accept at least a good portion of the process to be empirical. That means it's not clean, it's not predictable and it's not going to make you comfortable.
When a prospective consulting firm approaches you and claims they have a neat, linear, sequential process that they have proven over countless years, they are setting your expectations in such a way that you are likely to be frustrated. In other words, they are partially to completely full of crap.
27 February 2007
Software Architecture - Why it Matters
I have worked with a few people over the years whom I would describe as the "just get it done and soon" type. These guys (these few have all been men) eschew putting much thought into the software they build, or thinking about kaizen or continuous improvement. Working with them is frustrating because they tend to create systems that are better torn down than modified. In other words, thanks to their myopia and/or hubris (to the sales manager: "sure, we can do that in two weeks!"), the software they build has massive design debt and huge costs to maintain. (I am not going to get into the other extreme right now -- the guys who suffer from analysis paralysis -- but that, too, can be a big yet much different problem...)
The well known benefits of good design are written about throughout the software development world. But the one benefit that may be most important is this: good software architecture is motivational.
Working for a company that supports thoughtful software design ignites pride. Having people at all levels of a company who understand what good architecture is makes this possible. For me personally, it is a rush knowing that everyone from the developers on my team to the CEO understands that this philosophy contributes to:
- A positive perception of our product
- Motivation among the team, and
- Increased profitability.
I have seen the architecture weenies that get so bogged down in design issues that they make slow and pained progress. However, that is usually when they are in an organization that is unbalanced and indulges them because it lacks good business people. It is more often, in my experience, that the siren song of "just get it done now, it's good enough" can influence people in almost any organization, and is therefore more dangerous.
23 February 2007
OpenDNS
Scott Hanselman has a great post about OpenDNS which goes beyond what I know about it. However, a couple weeks ago I began using it due to frustrating issues with my ISP, and I think it's a good alternative to know about.
I live in an area that technological time has largely ignored. I like it that way except for needing seriously fast internet access. For the first year and a half, I used (and will not put a hyperlink to) Direcway, or HughesNet as they are now called. A satellite ISP. It was better than dial-up, and that is all I can say about it without bashing it.
Last summer I found out some of my neighbors used a local wireless ISP, which has several towers around the northern Shenandoah Valley and will install an antenna for you to connect to its 802.11b access points. I am thirteen miles away from the nearest tower, but since I am 500 feet above the Shenandoah Valley floor, I have a line of sight to it. They hooked it up and it worked. It worked really well. The speed varies between 1 Mbps and 11 Mbps. A lot of the issues with satellite went away, such as weird timeouts with MS Outlook Web Access and the unholy bandwidth throttling (aka "Fair Access Policy"). Good riddance, satellite!
All seemed good for a while, but then I started having occasional trouble connecting. My calls to their tech support would usually end with them saying, "we can ping your antenna, why don't you power it off and back on." Often that fixed the problem, but not always. Usually after some time things would work again. But I was getting really peeved with the lack of reliability.
I noticed one day that when opening a new tab in Firefox and experiencing the connection problems, some of my other tabs would still connect to their respective URLs. That made me think that this must be a DNS problem on my ISP's end.
I stumbled across OpenDNS in a podcast I was listening to (something from ITConversations, can't remember which podcast right now though). I modified my router's DNS configuration to use OpenDNS's servers instead of my ISP's. That was almost a month ago and my "connectivity" problems have not occurred since. So +1 for OpenDNS, and thanks to Scott for a lot more helpful info about it.
18 February 2007
ASP.NET ObjectDataSource and Business Objects
This past Friday's ASP.NET MVP Chat was very good, with a lot of participation by well-known experts. I learned a lot just sticking around reading both the experts' answers and the side banter.
Anyway, one issue that has been bugging me is whether to use declarative data binding in ASP.NET 2.0. So I asked Scott Guthrie if it was really a prime time thing or just something to "give good demo." He and Scott Mitchell both suggested it's the way to go, so I poked around and found the following:
ASP.NET 2.0 Data Tutorials
Peter Kellner's SQLDataSource Comments
Brendan Tompkins ObjectDataSource/GridView Experience
DataTableAdapter Article at CodeProject
Rocky Lhotka's ObjectDataSource Frustration
I am not liking what I see.
The gist of all this seems to be that the ASP.NET 2.0 data binding controls are fine for ADO.NET data objects, but when you have custom business objects, a lot of coding jujitsu may be required. In other words, not only are you not saving time because you have to code a fair bit to make things like sorting and filtering work, but you are taking a leap of faith that the magic stuff being done for you behind the scenes is going to just work. The posts on the ASP.NET forums indicate a lot of issues that may consume quite a bit of time to debug. Again, it seems like for ADO.NET data objects, it works mostly, but for custom business objects, the jury is out.
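For reference, the declarative pattern at issue looks roughly like this (the type, method and control names here are placeholders):

<asp:ObjectDataSource ID="odsCustomers" runat="server"
    TypeName="MyApp.Business.CustomerService"
    SelectMethod="GetCustomers" />
<asp:GridView ID="gvCustomers" runat="server"
    DataSourceID="odsCustomers" AllowSorting="true" />

Sorting, paging and parameterized selects are exactly where the extra plumbing described in those posts starts to pile up.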
Peter Kellner's experience (above) is another twist. To use SqlDataSource when a query requires filtering based on the current user id (a pretty common requirement), he documents a somewhat arcane technique to get this to work declaratively. That's cool, but now we're creating more "maintenance debt," and I'm left wondering what else is lurking to trip me up if I go down this recommended path.
We really should be able to implement some well-defined interfaces in our business objects and just have this stuff work. That has to be the best approach, and I have to think this is very close to being possible right now. I am going to check this out a bit more before I give up and go back to "DataSource = " and "DataBind()" and other trusted but monotonous ways.
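That fallback, for contrast, is just two lines in the code-behind (CustomerService being the same hypothetical business class as in the markup sketch above):

gvCustomers.DataSource = CustomerService.GetCustomers();
gvCustomers.DataBind();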
Microsoft Help Needs a Laxative
Help in Microsoft tools used to be great. Of course, that was ten years ago. In Access 97 or VB (was it VB 6 back then?), I could hit F1 and within two or three seconds the topic would pop up and show me concise information in a well-structured help file. It doesn't take over my screen. Searching is fast. Freaking brilliant!
That was the pinnacle of Microsoft Help, at least for software developers, before .chm files and whatever this new crap is in Vista.
Fast forward to 2007. I'm in the venerable Visual Studio 2005. I have a dual-core processor, a big-ass hard drive and more RAM than I ever thought I would need. I land on a .NET Framework object instantiation in my code and hit F1. I wait 30 seconds. I finally end up in the Microsoft Visual Studio 2005 Documentation application, which by the way looks similar to the thing launched from SQL Server 2005 Management Studio (and THAT THERE is a constipated beast of a tool, my comments coming soon...). The Index tool window is unfiltered, its lookup text box is not synchronized with the topic I'm viewing, and neither is the Contents window.
I click Search from the command bar, enter something interesting and click the Search button. It looks all over various Microsoft sanctioned web sites (MSDN, Codezone and some forums). I click Cancel after it has found some things and a big red X with "Search failed" is displayed.
Anyway... if the thing were just fast and lean, that would be great. I can use Google to search everything else much faster. Microsoft, you have made this help tool so bloated and slow that it's not really helpful.
26 January 2007
Assess That Position
Monster has some good stuff sometimes. Their emails arrive in my inbox regularly and every now and then something good pops up. Here are some excerpts from "Know When to Take the Assignment," which is officially about considering temporary assignments:
It's critical to your professional identity and your career that you not allow yourself to fall for mediocre clients and mediocre projects.
You have to strive for impact.
Assignments should either give you experience with new skills, help solidify your expertise and focus on a particular discipline or fill in gaps where you know you don't have enough experience.
Good advice for any position, not just a temp or contracting assignment.
Ask questions such as:
- What qualities do the most successful employees in your company possess?
- How does the team I'll be working with handle conflict or differing opinions?
- How does the company recognize employee accomplishments?
- What is the management style of the person I'll be reporting to?
- Describe the personalities and styles of the company leaders.
- What are the company's values?
- Which philanthropic charities does the company support?
24 January 2007
How to Hire an IT Sales Professional
How to Hire an IT Sales Professional? I wish I knew the answer to this. Most sales people, good and bad, are usually very good at talking and selling themselves. That means you could be their meat and end up getting snowed. So what can you believe?
They need a book of contacts to harvest leads, certainly. You need to know it's real. They need to show you good references, a whole bunch, and you need to follow up on those and sniff out the candidate's track record. You've already been assured by him that it's stellar, you need to verify that.
Does she listen? Sales people are great at talking. The bad ones do that much better than anything else. The good ones hear. Does she understand the nuances when you explain things? Does she get the details, not just the concept?
Has he been there and done that? Are you looking at a guy who has been in a completely different industry and assures you, by analogy and example, how his experience transfers to your industry? Be wary. It can work, but it's another hurdle, another red flag.
What is the candidate's personality? I hear about these archetypes for sales people, the hunter and the farmer. The former is aggressive, the hard-sell type. The latter is nurturing, establishes consensus and builds relationships. I don't know why there is this idea of mutual exclusivity. You really don't need a hunter or a farmer after all. You need a skilled business rep who can communicate and close deals, knows when to push and knows when to wait, knows when to step up and knows when to turn it over to others in your organization. He can drive a hard bargain as well as cooperate. He is respected, not disdained.
What is the candidate expecting? Is he sitting in his chair all day? Is he out and about? Commission, tools to help him do his job (company car?), periodic review of his goals... and you? What are you expecting? Getting the wrong person is a big setback and can take a long time to recover from. Hire him on a trial basis, set the goals, and make the decision to retain or terminate at the close.
Here is an example of a job description for a sales rep at MWI. It's pretty clear:
What We're Not Interested In
- Somebody to watch the phones. We've already got someone doing that for minimum wage.
What We Are Interested In
- Somebody who is out of the office 90% of the time meeting with people.
- A lead generator and closer. Find new business, get them interested, and close the deal. We'll take care of the rest.
What We're Offering
- 20% commission on project-based sales. You sell a $20K website, you get $4K.
- On SEO there is an initial commission plus a six-month residual. You close a $5K per month SEO deal and you get a commission equal to the monthly fee, plus 20% of the monthly fees for the next six months -- that's $5K up front plus 6 x $1K, or $11K per deal. That means if you closed one $5K/month deal per month you'd be bringing in $132K per year in commissions. A salesperson with the right experience, connections, and skills should be able to do double or triple that.
- Health benefits. Candidates who prove themselves during a two-month trial period will be offered full-time employment which includes health benefits.
- And of course, the soft stuff. A nice work environment, a growing, exciting industry, cool co-workers without egos, and almost complete autonomy.
- If you'd like to get a better idea of how we sell our search engine marketing services, download our search engine marketing sales presentation.
That's good. Get someone who will do that!
19 January 2007
Kick the Oil Habit?
This study by the Pacific Northwest National Laboratory, a Department of Energy research lab operated by Battelle, finds that the existing U.S. power grid could power 84% of the nation's vehicles if they were plug-in hybrids. This is one of a number of promising ideas. I'd still want at least a backup gas tank for those times the power goes out around here.
18 January 2007
Management Getaway
Let us, then, be up and doing,
With a heart for any fate;
Still achieving, still pursuing,
Learn to labour and to wait.
- Henry Wadsworth Longfellow, "A Psalm of Life"
Longfellow could have been writing about the antithesis of the move into management that knowledge workers typically feel is their inevitable career path. Away from the daily problem solving and coding, and the "just get it done" mentality that is often unfortunately imposed. An "escape" to a world where they can have meetings, interface with suits, say the word "verticals" a lot and pretend their documents and MS Project files are part of the vision that drives the organization's success. Woefully misguided are these aspiring managers.
The trouble is, completely fleeing the daily doing and the gusto attitude of the alpha geek can often turn one into the "just-get-it-done-according-to-my-'plan'" managerial type. These managers may, through their deteriorating expertise, gradually prevent an organization from integrating the skills and internal knowledge it needs to compete in the tech industry. In other words, the manager fleeing from his inner geek may help his company disintegrate, in many senses of the word. Like-minded managers will coach him along that path, blissfully thinking they are right. People who are becoming gradually less informed are making key decisions.
Good tech companies do not let managers get themselves completely out of the tech side of the business. They make an effort to keep them in the game at some level. Take Scott Guthrie at Microsoft. Scott is in charge of the teams that build the .NET CLR, ASP.NET, IIS, WPF and more. That's a lot of responsibility. Yet he spends significant time every week writing code, to stay sharp and help make better decisions. Good managers understand that the business dictates some of the skills they must maintain. A tech business dictates that you have to maintain quite a bit of tech knowledge if you are going to make good decisions and therefore be a good manager.
The trouble for organizations is that good managers are hard to come by. "Good" means many things. It includes realizing independently what will make you effective. It also includes delegating things that are not completely part of your expertise and granting autonomy to those implementing these pesky details. Good managers know that this means helping those who will make their team look good, and sharing in that success. Bad managers think they have to appear to think of, know, say and direct everything.
The result is that the fine knowledge plebes often have to communicate with managers who really don't understand their jobs anymore, and sometimes with managers who are just ineffective and blissfully unaware. They have checked out, and they mistakenly think that is a logical progression for their managerial career track. The temptation to "graduate" into management often begets a lackadaisical outlook. Things change rapidly. The luxury of checking out of the process of keeping up is a siren song of organizational ruin.
If you are going to be an effective manager in the tech field, you need to keep your chops sharp. Not daily nuts and bolts sharp, but the more you understand the more effective you will be.
16 January 2007
Exception and ApplicationException
I had this straight in .Net 1.x: derive custom exceptions from System.Exception, not from ApplicationException. The reasoning: you want your exception hierarchies to be wide and shallow, not deep, and ApplicationException doesn't add anything to Exception anyway. Jeffrey Richter notes that the two types were originally meant to distinguish CLR-thrown exceptions from application-thrown ones, but that this distinction was not followed consistently. Therefore, stick with System.Exception.
In .Net 2.0 we seem to have conflicting directives from Microsoft, which I am assuming is an error. The MCTS 70-536 exam study materials state (on p.24 of the training kit book) that we should derive from ApplicationException. That flies in the face of the other guidance and appears to be based on the original .Net 1.x recommendations, which were later revised. The latest MSDN docs recommend deriving custom exceptions from System.Exception.
I'm going to stick with System.Exception as the base class for custom exceptions. Don't know why I ended up bothered by this today.
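For what it's worth, here is a minimal sketch of what that looks like. The exception name and the order-processing domain are hypothetical; the point is simply the base class:

    using System;

    // Derive from System.Exception, not ApplicationException.
    [Serializable]
    public class OrderProcessingException : Exception
    {
        public OrderProcessingException() { }

        public OrderProcessingException(string message)
            : base(message) { }

        public OrderProcessingException(string message, Exception inner)
            : base(message, inner) { }
    }

The three constructors mirror the ones on Exception itself, so the custom type can be thrown and wrapped just like any built-in exception.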
10 January 2007
Apple Rocks but What is Missing?
More concerns about the iPhone, which I otherwise love as already stated:
- Battery à la the iPod, or can we replace it when it's old and tired?
- Support for Office docs?
- Support for Exchange Server?
- Can we expand it by installing our own software (like Palm, Symbian, and Microsoft phones all allow)?
Fools and Fanatics
The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.
- Bertrand Russell
C# and VB.Net
Though this is now a barely simmering debate, my thinking about which of the two great .Net languages to choose is pretty simple -- choose the better one.
For almost a decade I wrote applications using nothing but VB 3, 4, 5, and 6, or VBA (in Access), or VBScript (in old ASP), along with HTML, a smattering of JavaScript, and quite a bit of SQL (T-SQL and Access SQL) for every application. But it was largely some form of VB, all day long, that I used to create my bread-and-butter apps. Before that it was Paradox (PAL), BASIC, and some macro languages for business apps. VB, though, became my career.
When .Net came out of beta in late 2001, I began a project writing a source code repository with content in both VB.Net and C#. I had to learn not only the .Net framework but also a fair bit about object-oriented design and programming (two different but closely related things) that I had been able to avoid in VB. The new framework and the OOD/P paradigm took the most effort, but having two new languages thrown on top added to it.
Yes, two new languages. VB and VB.Net, while similar in syntax, are very different beasts.
So, as a long time VB/VBA/VBScript programmer learning both VB.NET and C#, it seems a no-brainer which one I took up and looked to as the better language...
It is C#. Why?
- It is more concise, I would even say more elegant, than VB.Net. The verbosity of VB.Net becomes stressful. The scannability of well-constructed C# is very good in comparison. In short, I can read well-done C# code much more easily than well-done VB.Net.
- The amount of crappy VB.Net code due to VB developers diving into .Net is astounding, and VB.Net unfortunately lets you get away with a lot of sloppiness. I may get around to explaining that in another post; I have a lot to say about it...
- Developer tools tend to be done for C# more than VB.Net. In fact, many only support C#. Though that is changing, VB.Net support is often an afterthought.
- The background compiler in VB.Net can sometimes be nice when you're making quick changes, but it is more often intrusive and stressful while coding. For example, if I want to move off an incomplete line to copy and paste something else, I don't need the squiggles or the closing parens added when VB.Net doesn't know what I mean. Tools like CodeRush are interfered with by the background VB.Net compile thread, especially in Visual Studio 2003. Moreover, try VB.Net on a large solution (I have) and you will run into serious frustration and the need to restart Visual Studio regularly during your day to clear the futzed state the IDE gets into.
- C# has always caught things like unused variables, unreachable code, and non-void methods that don't return a value as warnings or errors. VB.Net may have solved one or two of these in the 2005 release, though.
- Option Strict and Option Explicit. The ability to switch these legacy VB-isms off should have been thrown out. Other than in some Office Automation code, I can't see why you would ever want them anything other than ON. And way too many classic VB developers who come to .Net don't turn them on at both the project and source-file level.
- I hate the line continuation character, and you'd think I would have gotten used to it from VB, but I just never have liked it. Especially in .Net when you're working with attributes, the C# syntax is much less confusing, more writable and just easier to use.
- Array syntax in VB and VB.Net. They just should never have kept parens instead of adopting square brackets when VB.Net came along. The thing that ticks me off most is when I go back to a C# project after working on a VB.Net project and keep smacking myself when I use parens for an array and hit compile errors (see the short snippet after this list). I hate that.
- Sometimes I actually like the background compile! But not usually, and I wish it could be simply turned off.
- Delegates vs. WithEvents for event handling. In VB.Net, WithEvents is a real boon to developer efficiency -- a point in its favor.
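To make the array-syntax gripe concrete, here is a trivial sketch (the class name is arbitrary):

    using System;

    class ArraySyntaxDemo
    {
        static void Main()
        {
            // C# uses square brackets for both declaring and indexing arrays.
            int[] scores = new int[10];
            scores[0] = 42;
            Console.WriteLine(scores[0]);
            // The VB.Net equivalent is scores(0) = 42 -- parentheses, the same
            // characters as a method call, which is exactly the habit that
            // bites when you switch back to C#.
        }
    }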
A more important point is that if you work with .Net, you need to know BOTH of these languages, no excuses.
Apple Rocks the World
Steve Jobs' announcement at yesterday's Macworld conference about the iPhone is big. I have been reading a variety of opinions about how this impacts the world of cell phones, including Nokia, RIM and the like. I think that as this news is digested, there is still more upside from this announcement if you're an investor or a potential investor in Apple, as I believe people will realize the following:
- So called "smartphones" today are generally awful. Usability is poor, too many compromises (some have touch screens, some don't; some have good keyboards, some are lousy; some are actually decent phones and are mediocre at their other functions, most are not so good; some function as music players, others can but not well, or have incompatible headphone jacks, on and on...), just too much to not like in any example. I have looked at the Blackjack, Cingular 8525, the Motorola Q, various Blackberries and Treos. They all have some compelling features (keyboard, push mail, WiFi on the 8525) and some infuriating omissions (no touch screen on the Blackjack, the Windows OS is problematic, the Palm OS is showing its age, the Blackberry is great for mail but mediocre for everything else). The iPhone has it all, and the non-phone features are finally all there and appear to be done right.
- Standard form factor. Look around: these things range from really thin to too thick, too heavy, or unable to fit into a jeans pocket -- there is no standard. Apple is going to become the de facto form factor and design standard. Its simple design looks to make "gadget use tension" evaporate. I am not going to have to concentrate so hard and make five clicks with a crappy scroll wheel/button to record a voice memo to myself, for example.
- We are sick of carrying around all these specialized gadgets. A smartphone with tons of unused memory sits in my bag next to my iRiver 40gb mp3 player, my Garmin GPS, and my laptop which is with me even on occasions where I just may use it but only need to read email and surf. I don't have a PDA, but many people throw that into the mix, too. Here is where Apple is really separating itself from the lackluster crowd. This device looks to make the need for separate devices to handle these features largely obsolete right when it is launched in June, and eventually, completely obsolete.
- Connectivity. Every smartphone these days should have Bluetooth 2.0, WiFi b and g, as well as its carrier's 3G access built in. They usually have one and a half or two out of three. WiFi is the iPhone's coup de grâce. The Blackjack should have this. The Nokia E62 should have this. Most smartphones have gorgeous screens and close to enough real estate to let you surf using whatever connection your home, office or coffee shop infrastructure offers. Most carriers cripple the WiFi so that you are forced to use their expensive 3G plans for internet access. Unless you travel a lot or are otherwise forced, you simply pull out another device to surf and check email. A device with a beautiful screen like the iPhone's, usable in landscape mode, makes a great hand-held browser.
Open questions:
- Will the on-screen keyboard be usable?
- Is the case going to be durable? (See how an old iPod wears -- not too well.)
- Will 3G access be available soon? (It won't be for the June release)
- Is it as easy to use as all those iPhone videos on the web make it seem?
- Is the call quality great?
- Does it "boot up" or just turn on?
- I'm still lukewarm about iTunes, will that improve? It seems integral to this device.
- Does it come with a charger, a decent case, and hopefully a car charger?
- Does bluetooth work as well as it should?
- Will a competitor emerge that gets the design just as right?
- Is the memory expandable?
04 January 2007
The Construction Analogy
This is a common way for people with little experience to analyze or (gasp) explain the process of creating software. The problem is, building a house and creating a custom software application have little in common. Not nothing, but very, very little.
Here is a classic situation. Joe the sales guy is explaining his company's software development process to a potential client. The client questions why a large requirements document is necessary. Joe gets animated (apparently, he's really experienced) and says, "Well, it's like you're building a house. You need a blueprint. Your builder needs to know how many rooms, where the plumbing, electrical and HVAC goes, where the bathrooms are... heck, it's not like you can get half way through building a 2500 square foot colonial and then decide that you really need a 4000 square foot ranch."
Actually, the "2500 square foot colonial --> 4000 square foot ranch" is exactly what you frequently do on a software project. In fact, if you can't do that for your clients you're likely locked into a very poor development process. But there is much more to why this analogy has little merit.
How about some more Joe-isms about software development:
- "You can't just rip apart the foundation while you're finishing the attic." Yes, you can.
- "You can't finish a brick house and then decide the windows are too narrow and change them." Yes, you can.
- "It would be like taking a single family house and turning it into an apartment." What's the big deal about that?
When you build a house, you have to follow the waterfall methodology. That is, you need a plan that is complete, then you begin construction, starting with excavating a hole in the ground for the foundation, then you pour the concrete, then you frame the ground floor, then you frame the walls, then the roof, etc. After that, the building inspector can look things over and approve it. Only then can the electrician come in and run the wiring. Et cetera.
The waterfall process, which is so core to the construction industry, is a very poor approach for most (but not all) software development. Why is it poor? Because it locks you into a fixed course in a process that requires constant discovery, revision and reaction to changing business requirements, stakeholder feedback, and competitive pressures. Your building just needs to get finished so the family can move in -- or so the software company can set up its servers and start writing some code! Waterfall may be a good process (I'm not 100% convinced, but it may be) if the software you are creating is similar to many other projects you've done. Of course, if that's the case, you might be better off just finishing it one last time, boxing it up, and selling it at CompUSA for $49.99. You don't need a process for that ever again.
In what I feel is a more sinister result of this construction analogy, we have people known as project managers who have been trained to bless this waterfall approach to software development. I recently had a conversation on a flight to visit a client with a woman who works for EDS. She was a project manager and was complaining about how her clients are constantly changing their minds about software that was "under construction." I tried to think of a project I have worked on where that did not happen. I can't recall a single project.
The PMI (and I am not an expert at their certification but have worked with people who are PMI certified) seems to endorse these staid methodologies. I may be very wrong about their focus, but when I read an overview of the PMP credential, I see that it includes the following: "initiating the project, planning the project, executing the project, monitoring and controlling the project, closing the project..." This smacks of a construction project in my book, not software development. The few people I have experience working with who are PMP certified seem to endorse this. Again, in some projects this may be a valid approach (and again, I'm not convinced it is, but it could be sometimes).
So does all this mean I feel that planning and project management and some structure is bad for software development? No. You need to plan, manage, and rely on some structure. But it is different for a construction project than it is for a software project. And thankfully, the software industry is responding very well. The problems of estimating, collaborating, managing and creating software have been studied by a lot of very smart people. Martin Fowler, Scott Ambler, Alistair Cockburn, Grady Booch, Kent Beck, Ken Schwaber, and heck, a lot of mere mortals, even me.
The construction analogy is way off for most software projects. Sometimes it is good for explaining to a client the challenges in estimating costs, for example, or to illustrate the complexity (at a very high level) of developing software. I think any other use of this analogy stretches its validity, and I think people have to stop raising it as justification for almost everything.
The construction analogy is not valid. In fact, software development is mostly a unique activity.
01 January 2007
Meetings and Goals
Warren Buffett on CNBC tonight tells Liz Claman that he doesn't have meetings with the managers of the businesses he owns; he just tells them what he expects. The woman who runs Borsheim's jewelry, which Buffett owns, says that at the start of each year he sends her a single sheet of paper with the business goals for the year.
From my own experience, the number of meetings tends to run inversely to the focus on goals. Meetings are often where poor managers assert their control, or stoke their egos. Very few things get resolved at planned meetings, fewer still when they are lengthy.
The less formal a meeting, the more effective it is. Short, frequent chats with team members can replace a lengthy weekly meeting with much better effect.
Meetings are good for brainstorming, mediocre for motivating, and poor for communicating in a level, focused, ego-free manner.
If you have to have a meeting, it needs to have a goal. Send out an agenda, keep it short and focused, and end it with a summary of the action items agreed upon to meet the goal. Follow up, too.
21 December 2006
Open Source vs. Microsoft
The success of the .Net platform has coincided with the rise of open source development, and has spawned quite a variety of open source .Net tools. Microsoft, however, has decided not just to decline to embrace this (CodePlex aside), but to contribute to the death of these tools. And hey -- this post is not about Linux!
We have NUnit, and now Visual Studio Team System includes Team System Test, in many ways inferior. Integration of NUnit into the development environment via TestDriven.Net, along with NCover, made a powerful unit testing toolkit.
We have NAnt and CruiseControl.NET and Draco.NET for build management and continuous integration. VSTS now has MSBuild which is, again, not quite there.
We had NDoc, which has now been abandoned by the original developer because of the upcoming Sandcastle.
I'm not sure where the advantage is for Microsoft. The open source .Net community is thriving and has created a rabid group of developers that loves .Net. That's a group that is going to support Microsoft and not run away to use Linux, Apache, Java, LAMP, Ruby or other very capable and competing platforms. Had Microsoft included the full capabilities of these open source projects in its own tool set early on, everyone would have been happy. But now, the appearance is that the company wants to kill these things. There are so many options they could have pursued to integrate these tools by being inclusive and fostering goodwill toward the community... damn, I sound like a pansy... and to top it off, they haven't even done the analogous tools as well as the open source versions. Not only is the message clear ("we'll do it and you'll like it") but this creates a chilling effect on future open source projects.
It seems that many big companies sometimes just have their proverbial heads up their butts. When I see what Microsoft now offers (which is fantastic, no doubt) and consider the alternatives (Ubuntu, Java, Ruby on Rails, MySQL, on and on...), I'm not feeling so insecure about not choosing Microsoft for every project. The argument can no longer be confidently made that "you know their tools are going to be the standard." That Microsoft has been in the crosshairs of many a competitor and come out on top nearly every time is, as the investment mavens say, "no guarantee of future performance."
13 December 2006
More Podcasts
TWIT has some very good podcasts. Leo Laporte's KFI program (soon to be called something else, I think, because he is leaving the radio station on which the podcast is based) showcases his knowledge about almost everything tech. He fields calls from the confused masses on topics ranging from HDTV to Linux, Mac and Windows. Steve Gibson's Security Now (hosted by Leo) is extremely interesting. Steve is the man behind SpinRite and his website is a good source of utilities and security information. The production quality on TWIT is top notch. Leo has that classic deep radio tenor; this marriage of radio and podcast works well.
08 December 2006
Encrypt your Thumb Drive
TrueCrypt is a great, open source, free encryption tool for thumb drives. I have been using it for several months now on my 2GB thumb drive, with AES encryption, and it has been working flawlessly. I created mount.bat and unmount.bat batch files to hook it up and take it down. I couldn't get autorun.inf to do this automatically when I connect the drive on Windows XP; maybe I will come back to that when I have time. Highly recommended.
Some good links for how to set it up and configure a mountable thumb drive:
http://glosoli.blogspot.com/2005/09/encrypted-thumb-drive-and-autoplay.html
http://lifehacker.com/software/portable-applications/hack-attack-quicklaunch-your-usb-workspace-182792.php
Microsoft IE, Proxies and Sandboxing
I switched from using Internet Explorer to Firefox about a year and a half ago, with no regrets. The only downside has been using sites that require ActiveX controls, such as Windows Update, Sharepoint intranet administration and sites that distribute software using Microsoft .Net "click-once" deployment. And damn these sites for forcing me to use IE! The (widely known) reason to avoid IE is poor security. Despite this, I know people who are still using it and think nothing of it. It makes sense that open source products such as Firefox fare better on security: the source code is reviewed widely by people both inside and outside the organization, and vulnerabilities are identified and corrected regularly as a result.
Since I made the switch, my anti-spyware has shown a remarkable drop in my acquisition of spyware/malware while surfing. In fact, the only thing I regularly see is the odd tracking cookie. Between good habits like not opening email attachments, setting mail readers (Windows-based and web-based) to not display graphics in HTML email, running a software as well as a hardware firewall, keeping my systems updated and fully patched, and running anti-virus software (AVG free edition currently), I have happily avoided infection. And not just obvious infection, but any infection. People who may not know this should be aware that, increasingly, malware can run without obvious symptoms while intercepting information you are entering in various places.
You can further lessen your exposure to bad things by running your own local proxy server, such as Proxomitron, installed and configured on your machine. By piping all web requests through this local HTTP filtering program and configuring it in various ways, you gain another useful means of blocking bad content. The details would require an entire article, but their website is a good place to start.
And speaking of proxy servers, IE has or had (it may have been patched) another issue with using the auto-detect proxy settings option. Steve Gibson described this on a recent TWIT security podcast. Even without the vulnerability, you want to uncheck this option to avoid the performance hit it causes.
The other issue that concerns me is web surfing on machines that other people have access to. I would like to be able to buy something online and do things like log into my work email without leaving cached information on the machine's browser. The solution may be sandboxing, which Steve Gibson also recently discussed. Using a tool like Sandboxie, you can avoid leaving typical artifacts of browsing on the machine you're using. Sandboxie basically creates a virtual sandbox in which it runs any application (such as IE), and all caching is done in the sandbox. When you're finished, these files are deleted when the sandbox is shut down. That is brilliant.
01 November 2006
Null
I have adopted the practice of not allowing nulls in lookup tables and foreign keys to those lookups. The reasons for this are:
- Application logic is simplified for drop-downs and the like that allow a blank value. For example, drop-downs used on search screens to select from a finite and known list should allow a user to reset the selection by picking the blank row. Data access and business layer code also benefit greatly, as anyone who has implemented the typical "if (dRow["colName"] != DBNull.Value) ..." check knows.
- Query logic is simplified. Outer joins are not required.
- Query performance and indexing are faster because nulls are not allowed.
In fact, if it were not for a couple of specific issues, I would extend my reasoning and say that nulls in any database table are bad. The DateTime issue is unfortunate, because there may be times when you legitimately do not have a date value and "1/1/1753" won't cut it as an indicator for that. Also, calculations on columns where the data is legitimately not known really need to allow nulls so that the unknown data is excluded from the calculation. I can't readily think of other exceptions, though. Something to ponder.
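To show the "simplified application logic" point from the list above, here is a small sketch; the DataRow column name is made up:

    using System;
    using System.Data;

    class NullHandlingDemo
    {
        // Nullable column: every read needs the DBNull dance.
        static string ReadCategoryNullable(DataRow row)
        {
            if (row["Category"] != DBNull.Value)
                return (string)row["Category"];
            return "";
        }

        // NOT NULL column (the blank/"unknown" value is a real lookup row):
        // the read collapses to a single cast.
        static string ReadCategory(DataRow row)
        {
            return (string)row["Category"];
        }
    }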
03 October 2006
Why You Should Script your Application's Database Changes
It's easier not to do this, so why do it? This argument comes up, for some reason, more often than I thought it would. What happens if you don't script your database schema as well as your data, each in separate scripts? What can you not do if you just keep one database that everyone mucks with, and then try to capture its changes when you deploy v.next? Here are some of the things you will not be able to do:
- Version control - your database structure, system data, and stored procedure and function logic constitute a bona fide layer of your application. You will not be able to track changes or easily roll back to a previous database version if you don't use version control of some sort.
- Associate a schema version with an application version - if you can version-stamp your schema and therefore identify it, then your application has a known state it can check, so it knows whether it needs to degrade certain functionality, upgrade itself, or terminate with a warning (see the sketch after this list). The alternative (and unfortunately more common) approach is to just crash once a breaking schema change is encountered at runtime.
- Cleanly maintain separate copies of development, unit test, performance test, QA and actual data. Each of these serves a completely different purpose. Yes, schema changes require changes to these scripts and so are a pain in the ass, but if you can't purpose your data to serve specific testing scenarios, you are not testing very thoroughly. And if you're not testing thoroughly and in an automated fashion, that is a bigger pain in the ass.
- Automate your build process completely - you can automate almost all of it, but if creating database change scripts is a manual, time-consuming process, you are defeating many of the benefits of continuous integration.
- Deploy the latest database changes to your local "sandbox" database server - why send emails to your team saying, "I'm about to make a breaking change to the database, so go get lunch or something and try back in an hour." Much better to allow anyone at any time to grab a database to-go and install it locally. You can test changes there, not crash everyone else if you screw something up, and still be able to go back to a known good state if needed.
- Deploy the latest database changes to the real world - the latest database is defined by your change scripts, and upgrading a client's database is relatively simple and can be tested in-house to iron out the small problems. Better this than waiting for Bob to come in and spend three hours scripting things manually and wondering if it's really good to go.
- Compare the effects of historical changes on performance - did that index really speed up your queries? Slow down your inserts? And who created that table and forgot to index that column?
- Search and replace column names - not always so easy, but if you have some standards, it can be pretty simple and is a powerful capability.
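Here is a minimal sketch of the version-stamp check mentioned in the list above; the SchemaInfo table, its Version column, and the expected version number are all made-up names for illustration:

    using System;
    using System.Data.SqlClient;

    class SchemaCheck
    {
        // The schema version this build of the application was written against.
        const int ExpectedSchemaVersion = 42;

        static void VerifySchema(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT TOP 1 Version FROM SchemaInfo", conn);
                int actual = (int)cmd.ExecuteScalar();
                if (actual != ExpectedSchemaVersion)
                    throw new InvalidOperationException(String.Format(
                        "Database schema is v{0}; this build expects v{1}. " +
                        "Run the upgrade scripts before starting.",
                        actual, ExpectedSchemaVersion));
            }
        }
    }

Call something like VerifySchema at application startup and you fail fast with a clear message instead of crashing on the first breaking change you hit at runtime.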
Recycle Old Cell Phones
This is a great idea from Motorola. You can send old cell phones and accessories, postage is free (print from their website), and schools can set up a fundraising thing with them.
How to Explain This II
Someone asked my opinion today: "How would you find duplicates in this table?" I asked him to clarify because that seemed straightforward. He amended his question: "How would you find items here that are not in this list?" I was confused.
Draw a picture, I asked.
He drew a sample of the two sets of data and explained "For this entity, there are items 1, 2, and 4. For another entity, there are 2, 3, and 5. I want to find the missing items." He wanted to find the GAPS in SEQUENTIALLY NUMBERED sets. I wasn't getting that until he drew the picture.
This is (as I posted months ago) a frustration I have often. Sometimes it is due to my own confusion; just as often, though, it is because English doesn't cut it. Draw a picture instead.
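For the record, here is roughly what he was after, as a quick sketch (names and sample data are made up):

    using System;
    using System.Collections.Generic;

    class GapFinder
    {
        // Returns the numbers missing from a sequentially numbered set 1..max.
        static List<int> FindGaps(ICollection<int> items, int max)
        {
            List<int> gaps = new List<int>();
            for (int i = 1; i <= max; i++)
                if (!items.Contains(i))
                    gaps.Add(i);
            return gaps;
        }

        static void Main()
        {
            int[] entityItems = { 2, 3, 5 }; // the example from his picture
            foreach (int missing in FindGaps(entityItems, 5))
                Console.WriteLine(missing);  // prints 1 and 4
        }
    }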
17 July 2006
Scope Change -- Worth It?
I recently heard an interview with Kent Alstad in which he commented something to this effect: "If a change in scope is proposed for a project, ask yourself this: will the project succeed if we do NOT do this? If the answer is yes, don't do it."
That's about as clean a way to judge whether to give in when scope begins to creep (as it always does) as I've ever heard. The fixed price guys need to mind this. The agile guys will have a nice counter to the whole premise of scope, I'm sure.
05 July 2006
Design Debt
This book describes the concept. You create design debt when you crank out a solution without regard to its design, simply to meet a deadline. It works and is "done," but at some point in the future, if the system lives on, you're going to be compelled to clean it up, i.e., pay the debt down. And it may not be on your terms when that time comes.
If the overall benefit is greater than the cost, you're winning. If not, what are you thinking?
23 June 2006
Glossary
I have been big on the idea of creating a glossary of the terms and jargon that are embedded in a specific client's problem domain, and doing this very early in the project lifecycle. The justification is that these terms are often thrown around early and throughout a project, but their meanings are not all grasped by everyone until much later. Also, the people on the client's side often disagree among themselves about particular terms' definitions. This lack of clarity often results in subtle bugs in the system, or simply in expensive rework downstream. A quick search shows that some requirements methodologies (such as Steeltrace and maybe others) already encourage this. Putting it into practice seems like a lesser priority to some, but I think this is one of those little things with a big payoff that's hard to measure.
20 June 2006
I Run Free Software
Listening to my favorite podcast, lugradio, makes me feel like I've sold out to the evil empire. Then I realized, hey, I run free software! It's called Windows. I have never paid penny #1 for it in seventeen years. OK, maybe its cost is embedded in the price of the hardware I buy. Well, most of those in the rabid Linux community have paid that same price. They just took the extra step of not using it after paying for it, then installing something that is often unsupported by major hardware vendors and makes their lives much more difficult. Their goals are more noble than mine. Mine largely entail getting my job finished every day.
13 April 2006
Software Dev Future
The age of just writing custom code for most solutions is coming to a close. The companies who create the new standards and "platforms" (Google, Microsoft, open source groups, et al.) will still need developers to cook applications from new code, but the rest of us will be increasingly involved in 1) picking and choosing among the components these standards makers offer, and 2) wiring together the services exposed by these platforms.
There seems to be a lot of resistance among the developers I talk to about this shift. I hope you're ready for this.
23 November 2005
How to Explain This
Draw a picture.
If you think you're too smart to need to, you're wrong.
If you think Joe over there understands what you're saying, you're wrong. You should draw a picture.
If you think you understand Bob, you're wrong. He should draw a picture, write it down, look at you and ask, "you know what I'm saying?"
My basic rule is if a design requires me to stash things in little cubby holes in my brain while someone is explaining it, words alone won't cut it. Example: "a queue is going to hold 50 items, and it's implemented as a table, so we need a stored procedure to update the date if it's already there, otherwise we'll insert --" STOP. Draw a picture, man, PLEASE.
22 October 2005
If God Raced a Motorcycle
He'd be Valentino Rossi. He wouldn't have to create the supreme motorcycle racing being; He already has. Rossi wraps up yet another consecutive world title when it seems the machine doesn't even matter (his Yamaha was, and may still be, inferior). The competition can't keep up. He passes people at will. What can you say? Sure, God probably wouldn't have dumped it by colliding with Melandri at Motegi (OK, definitely would not have), and God would have won at Laguna Seca and Malaysia somehow. BUT, do you think He could have pulled off that win in the pouring rain at Donington? No, at least not any better, probably. Maybe.
I try to keep things focused on software development and related biz stuff, but Rossi is The Man. Surely there's a way to relate this to this blog's purpose, but since it's mystifying how he does it, it may forever be disconnected from any reality.
Forza Rossi!