Archive for June, 2008

Turn-on, tune-in, and get ready to update!

June 28, 2008

…and we're live!

Ok, so it took me all day but I finally did it. As promised:

ZOMG! They all passed.

I know what you're thinking: only 69 tests, that's really not too many. I know, I know, but believe it or not it took me a long time to get those tests working 100%. I spent the better part of today refactoring and reworking the tests to get them all to pass. They are pretty basic, but amazingly enough they enabled me to find some pretty big bugs, which I have since fixed.

I’ve been learning a lot about C# and the whole Visual Studio Team System suite lately: stuff like proper assembly usage and common coding practices that I never really needed to delve into when using C# before. Though I must admit that some things that would take an experienced Microsoft programmer seconds sometimes took me minutes or hours, since I had to dig through various documentation to find the right line to add to fix everything.

The Unit Test Hour:

Unit testing is fun! Trying to break your own software is just cool. After spending countless hours trying to perfect a specific function or class, it’s nice to just let loose and try to break it, ON PURPOSE! It’s also very satisfying to see all the little green (and red) icons light up after hours of black and blue coding; pun intended. I have no problem breaking my own code, and it usually gives me a good idea of what to focus on next.

After hours of trying various approaches to unit testing web code and writing some of the more basic unit tests for other classes, I eventually realized that I could use the JAXB approach. Here’s what I did: I took the template classes that are used to de-serialize the incoming SOAP XML from TFS and instead wrote a method which, when given an appropriately filled class, serialized the data. I then passed this “sample data” to the methods which handle the incoming TFS SOAP XML, and added code to each “notify” method to return true or false based on whether the sensor could send data or not. Very JAXBish. The only problem with this approach was that the unit tests were located in another project and I couldn’t find a way to access the classes I needed without copying them over to the unit test project. VERY BAD PRACTICE. But for now it seems to work. If anyone can suggest a better way of referencing classes from another project under the same solution please, please, please email me. I think it has something to do with assemblies.
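Roughly, each of these tests ends up looking like the sketch below. The class and method names are placeholders I made up for illustration; the real template classes and notify methods live in the web-service project:

```csharp
using System.IO;
using System.Xml.Serialization;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stand-in for one of the template classes that de-serialize the incoming TFS SOAP XML.
// The real class lives in the web-service project; this just shows the shape of the test.
public class CheckinEvent
{
    public string Owner;
    public string Comment;
}

[TestClass]
public class CheckinEventTests
{
    [TestMethod]
    public void NotifyReturnsTrueForValidSampleCheckin()
    {
        // 1. Fill the template class with known values.
        CheckinEvent sample = new CheckinEvent();
        sample.Owner = "testuser";
        sample.Comment = "unit test check-in";

        // 2. Serialize it to produce "sample data" shaped like what TFS actually sends.
        XmlSerializer serializer = new XmlSerializer(typeof(CheckinEvent));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, sample);
        string sampleXml = writer.ToString();

        // 3. Hand the XML to the notify method, which now returns true only if the
        //    sensor could actually send the data. (EventHandlers is a placeholder name.)
        Assert.IsTrue(EventHandlers.NotifyCheckin(sampleXml));
    }
}
```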

Another thing to note: you should always develop your unit tests as self-sufficient tests, and assume no particular order in which they run. I came across this problem while writing my tests. Early on I found that some of the tests which queried the database would fail when run the first time. On a second run, however, some would fail and some would pass. Very strange. On investigation I found that this happened because the first time they ran they modified the entries (or the database itself!), which created a different test context the second time around. I think I’ve gotten it a bit more balanced now: I create an entirely new database for testing, and after I am done testing I simply delete the newly created database. It’s slow and cumbersome, but it works really well, and since I don’t think people will be testing on a production server it’s a good compromise for developers.
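The create-then-delete database dance looks roughly like this in MSTest. The connection string and database name are placeholders, not the actual test configuration:

```csharp
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    // Placeholders for the real test configuration.
    private const string Master = "Data Source=localhost;Initial Catalog=master;Integrated Security=True";
    private const string TestDb = "HackystatSensorTest";

    [ClassInitialize]
    public static void CreateTestDatabase(TestContext context)
    {
        // Every run starts from a brand new database, so no test depends on
        // leftovers from a previous run or on the order the tests execute in.
        using (SqlConnection connection = new SqlConnection(Master))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand("CREATE DATABASE " + TestDb, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }

    [ClassCleanup]
    public static void DropTestDatabase()
    {
        // Throw the whole database away afterwards; slow, but it keeps tests self-sufficient.
        using (SqlConnection connection = new SqlConnection(Master))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand("DROP DATABASE " + TestDb, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```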

In the know: Namespaces

Useful little things. At first I didn’t see a need for them, but they really help to separate the structure of classes and code. I am currently adding namespaces to my classes so that I can see a logical grouping of related code in the Object Browser.
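As a tiny illustration of the kind of grouping I mean (these particular namespace names are just examples, not necessarily what I’ll end up with):

```csharp
// Related classes get grouped under a common root namespace so they cluster
// together in the Object Browser instead of all sitting in one flat list.
namespace HackystatSensor.Events
{
    public class CheckinEventHandler { }
}

namespace HackystatSensor.Settings
{
    public class SensorSettings { }
}
```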

Documentation with Matt in the Morning:

A lot of the problems I’m having are with documentation. Surprise, surprise. Though I am not quite having the problems you would think. Sure, there is a lack of good documentation on lots of things, but I think that is to be expected. The real problem I am having has to do with legacy documentation. I’ve been reading a lot of documentation that turns out to be the wrong documentation for the current version of the software I am using. Whether it’s the Hackystat version 7.0 docs or the Visual Studio 2005/2008 documentation, people tend to either mislabel or barely label which version of the software the documentation pertains to. Then there is the problem of old documentation lingering when the current archive is moved to a new location and updated, and the developers fail to remove the old copy. I think a critically overlooked requirement of developer documentation is that the docs for each version must be clearly labeled, sorted, separated, and kept in one place. This is probably one of the reasons Wikipedia does so well: it’s all organized in one place, and it’s extremely difficult to stumble onto old versions of wiki pages. Sometimes it’s actually a good idea to either update the documents or simply delete them entirely to force people to find the new ones. Archiving like pack-rats does not a good developer make.

…and he rode into the courtyard, at sunset, with only seconds to spare: GUI

Well, it’s not that good, but I spent some time this week redeveloping the structure of the GUI.

It’s all GUI...

It’s a bit bland and has many CSS errors, but it looks better than before. I will probably change it a few more times before the end of the project. Suggestions are always welcome.

What’s next

Now I plan to do some more advanced things. Besides better unit tests, I am going to delve into the Team Fortress, I mean Foundation, Server and see what data I can pull in addition to what is given when an event occurs. I should be able to pull at least a little more. I was also thinking of comparing changesets from source control when a new changeset is checked in, to give a better idea of what exactly developers are checking in.

B.T.W. This blog GUI really makes my post look longer than it really is.

B.T.W.x2 Anyone notice how most programs are adding instant spelling support, with little wavy red lines right under our noses? I just used TortoiseSVN today and when I spelled something wrong it suggested the correct spelling.


Housekeeping

June 23, 2008

Today I did some very overdue house-cleaning as well as set up the unit-test infrastructure. First off, I reinstalled VMware with a new version in hopes it would provide better speed. It did. Next I installed (finished installing) Visual Studio Team System 2008 and its corresponding Team Center add-in so I could access the Team Foundation Server and the project it was housing. I then set to work creating a new C# test project which will be used to create unit tests for the system.

Then the trouble began. Source control is a fickle thing, and it seems as though all of the updates, renaming, moving, offline deleting, and general mucking about with forces well beyond my comprehension had confused the Team Foundation source control server so badly that it wouldn’t add or delete the files I had changed. I did the only thing I could do, which was to back up the local files I had been working on and delete the project. (Don’t worry all, I have a second version on SVN at Main Repository.) I then created an entirely new source control project, added all the old files, converted the project to .NET 3.5, created a new unit test sub-project, and then committed everything to both the Main Repository and the backup. Now I know what not to do in the future.

I feel as though I have been lacking in substantial updates and pictures as of late. Therefore I hope to post a huge update this Friday.

The Good, The Bad, The …

June 20, 2008

The week started off very well. I was able to finish implementing sensors for all of the Event Services. Unfortunately, I just learned that I may have done things a tad incorrectly, as I was using the specifications for Hackystat version 7 instead of version 8. Not a big problem, as it should only require a small change to correct. I was also able to fix smaller connection issues which resulted in the data being kept offline even when it should have been sent to the Hackystat server. It turns out I had been overriding the “Owner” field, which is used to validate that the data is being sent by a user that exists in the Hackystat database. Replacing this with the owner who sent the data from the VSTFS database made Hackystat confused about who was sending what.
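To make the fix concrete, here is a rough sketch of the idea. The field names and the sensor data type are my own approximations of the Hackystat sensor data, not the project’s actual code: the Owner sent to Hackystat stays the registered Hackystat user, and the TFS check-in user rides along as a separate property instead.

```csharp
using System.Collections.Generic;

public static class SensorDataBuilder
{
    // Builds the key/value pairs handed to the sensorshell (illustrative names only).
    public static Dictionary<string, string> BuildCheckinData(
        string tfsUser, string resource, string hackystatOwner)
    {
        Dictionary<string, string> data = new Dictionary<string, string>();
        data["Tool"] = "VSTFS";
        data["SensorDataType"] = "Commit";   // assumed sensor data type
        data["Resource"] = resource;
        data["Owner"] = hackystatOwner;      // the registered Hackystat user, NOT tfsUser
        data["CheckinUser"] = tfsUser;       // keep the TFS user as an extra property
        return data;
    }
}
```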

I’ve been encountering some other strange connection issues. Some data seems to be getting lost somewhere. If I am offline, the data correctly gets stored in nifty little XML files by the sensorshell, and when I next go online they get sent out fairly quickly. But when I am already online, some of the data does not always get transmitted.

My hypotheses:

a. Data is being lost because it’s formatted incorrectly. Easy to test, easy to fix.

b. Data is being lost because of the current server setup. Not so easy to fix.

I suspect it is more B than A. Because the data must fly through the sensorshell, then through the virtual machine’s “virtual network card” to the host computer’s real network card, onto the network, and then into the server (which is running quite slow), it may be that the server just isn’t getting the data, and since my side (the client) technically sent the data, it’s not getting stored offline either. In any case, I now have a critical-priority ticket to fix the problem. That’s goal number 1 for next week.

A second problem I had was that my version of Visual Studio didn’t have the test suite tools add-in installed, which meant I couldn’t write any proper unit tests this week. Luckily I obtained Visual Studio 2008 Team Suite through MSDNAA and have installed it, so I can now start writing some tests (YAY!).

Another piece of good news is that Greg Wilson has pointed me to Jean-Luc David, who has written two books on TFS (Team Foundation Server) and VSTS (Visual Studio Team System) and has graciously offered to answer some of my more pressing questions about them. Hopefully this means next week I will be able to ramp up really quickly and get version 1.0 of the sensor out the door.

By the by: I plan to have version 3.0 done by summer’s end. (Everyone knows that 3.0 is always the best version 😉 )

IT LIVES!

June 16, 2008

Today was a pretty big day. I spent part of the day (let’s say the morning) trying to find out what causes some of the more esoteric events that people can subscribe to with VSTFS. These events include:

  • Any of the Node******* events
  • Any of the Identity******* events
  • The common structure change event
  • Data change event

Now that I have a good idea of what these events are and when they occur, I was able to simulate their occurrences and make the appropriate additions to the code so they can be handled by the web service. This means that my web service now crudely supports all 17 built-in VSTFS events and sends the appropriate information to the local UofT Hackystat server (I plan to test this later in the week… but theoretically it should work :p ).
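For anyone curious what “handled by the web service” means, here is a stripped-down sketch of a TFS event subscriber endpoint. The Notify(eventXml, tfsIdentityXml) signature and SOAP action are, as far as I can tell, what TFS 2005/2008 event subscriptions expect, but treat the details (namespaces, routing by root element name) as my simplification rather than the project’s actual code:

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Xml;

[WebService(Namespace = "http://tempuri.org/")]
public class NotificationService : WebService
{
    [WebMethod]
    [SoapDocumentMethod(
        "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03/Notify",
        RequestNamespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03")]
    public void Notify(string eventXml, string tfsIdentityXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(eventXml);

        // Route on the event's root element, e.g. "CheckinEvent", "BuildCompletionEvent",
        // "WorkItemChangedEvent", and hand the XML to the matching handler.
        switch (doc.DocumentElement.Name)
        {
            case "CheckinEvent":
                // handle a source control check-in and send sensor data ...
                break;
            default:
                // log and ignore events we do not care about
                break;
        }
    }
}
```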

As I am sure the names above are as confusing to you as they were to me the first time I saw them, I am putting up documentation on my wiki about each event and how you can trigger it. I don’t have them all up yet, but I plan to by week’s end. If you’re really interested in the project I highly suggest taking a look. The descriptions are short, and each includes a small screenshot showing how I went about triggering the event.

My plan for the rest of the week is now three-fold: I must begin to write up the wiki documentation on what I have done so far, I need to create tests for each of the events that can be sent to the web service, and I must start investigating how to extract more information from VSTFS than the web service is handed.

P.S. Anyone remember what step I am supposed to be on at this point in the project?

Steady as she goes

June 13, 2008

It’s been a particularly normal week. I am continually going through mounds and mounds of documentation just to find information on the simplest things. For example, the supported events are very poorly documented: no one source lists all the events and what causes each event to trigger. I plan to fix that in the coming weeks. Despite this, I was able to get the sensor tool to record some of the larger types of data that are strictly (I hope to go beyond the event system) given by VSTFS’ event system:

  • Additional Source Control changes (Branching, moving)
  • Build Systems
  • Work Items

There were/are also some connection issues when it comes to the sensorshell (the little guy that sends all the data to the Hackystat server). I was able to restructure the web app so that only one instance of the sensorshell is created at a time 🙂 . This means that data can be quickly sent to the server or stored offline without the need to restart the service on every event. The problem with this approach, though, is that if the server crashes or is forcefully reset/stopped without making sure all sessions have been closed properly, the child process (sensorshell) gets abandoned and outputs a text file like this.

The SERVER!!!!!!!!!!!!!!!

… forever!!! Did I mention that it also takes up 100% of the CPU? For now this means that every time I run the development version I must forcefully kill the sensorshell process. I believe it has something to do with the instantiation of the sensorshell. I am currently using some of the code from the Visual Studio sensor to initiate the shell, and I may look into how it actually does this to see if that could be the problem. It may also have something to do with MultiSensorShell. I will have to pore over the Hackystat docs to see exactly how this works.
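For reference, the “one sensorshell at a time” restructuring mentioned above boils down to something like the sketch below. SensorShellWrapper is a made-up stand-in for the project’s actual wrapper around the Java sensorshell process:

```csharp
// Placeholder for the project's actual wrapper around the Java sensorshell process.
public class SensorShellWrapper { }

// A lazily created shared instance guarded by a lock, so every event handler reuses
// the same running sensorshell instead of spawning a new one per event.
public sealed class SharedSensorShell
{
    private static readonly object padlock = new object();
    private static SensorShellWrapper instance;

    public static SensorShellWrapper Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    // Starting the shell is expensive (it spawns the sensorshell
                    // process), so we only ever do it once.
                    instance = new SensorShellWrapper();
                }
                return instance;
            }
        }
    }

    private SharedSensorShell() { }
}
```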

Some other small fixes: I fixed the logging so that it now outputs to daily files and appends each entry with a nice timestamp. It’s a simple fix but it really helps. I also finished the settings module. It will now look for the VSTFS server, create the Hackystat database/catalog if not already created, create the settings table, and store the appropriate settings. Useful for anyone who wants to start testing this sensor right away. I have also made some minor changes to the wiki page, given myself a load of tickets, and overhauled the Subversion repository.
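The daily-file logging is nothing fancy; something along these lines (the file naming and entry format are my own choices here):

```csharp
using System;
using System.IO;

public static class SensorLog
{
    // Appends one timestamped line to a file named after today's date,
    // so each day gets its own log file.
    public static void Write(string message)
    {
        string fileName = "sensor-" + DateTime.Now.ToString("yyyy-MM-dd") + ".log";
        string entry = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss") + "  " + message
                       + Environment.NewLine;
        File.AppendAllText(fileName, entry);
    }
}
```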

The plan of attack for next week is to continue implementing event detectors and begin writing some unit tests. I foresee problems with the whole unit-test situation, as there are so many independent systems that if even one of them errors, the test will fail.

Source and Wiki

June 12, 2008

A new URL for everyone who wants to keep up to date:

Hackystat Wiki and SVN

The SVN should be readable so people can check out a read-only copy and hopefully compile it for their own use.

Busy week

June 6, 2008

It’s been a very busy week. Aside from my own personal life, I was able to get some data sent all the way from VSTFS to the development Hackystat database we have set up at UofT. It uses the pre-existing sensorshell to send data, which gives the benefit of offline storage and product longevity, with the con of being slower than just making my own. Production has been good, and the sensor now sends a lot more data about version control than is shown in the screenshot below. I hope to send Hackystat as much of the data as the VSTFS event system provides, and maybe even some that it doesn’t 😉 . Right now it easily sends the equivalent amount of information as the SVN sensor. I shall provide screenshots next week.

In case you were still wondering what VSTFS is exactly, it’s basically some super-powered team management software that includes all of the tools needed to facilitate team coordination and planning. It’s like the corporate edition of DrProject. Right now the tool senses version control info but has the ability to sense more; I just need to decide what I want to send and what is overhead info.

One of the most frustrating things about developing a sensor for such a large and requirements-dependent system is the setup. That’s why I plan to make the setup and configuration of this sensor as easy and foolproof as possible. On that note, I am about 70% done with the initial settings infrastructure interface that will allow users of this very large sensor to quickly and easily configure critical sensor properties and store them in the (required) running MSSQL Server that stores the other various VSTFS information. Since all settings need to be stored in an SQL database, there would need to be some heavy security authentication going on if the user wished to access the sensor properties from outside the local system. Therefore I decided that, at the moment, they can only be viewed and changed from the computer that is running VSTFS. I hope to change that in the future… sometime… maybe… we will see.
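Reading a setting back out of that SQL database is about as simple as it sounds. A sketch, where the table and column names are placeholders rather than the real schema:

```csharp
using System.Data.SqlClient;

public static class SensorSettingsStore
{
    // Looks up a single sensor property by name from the settings table
    // stored alongside the other VSTFS data in MSSQL.
    public static string ReadSetting(string connectionString, string name)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(
                "SELECT SettingValue FROM SensorSettings WHERE SettingName = @name", connection))
            {
                command.Parameters.AddWithValue("@name", name);
                object result = command.ExecuteScalar();
                return result == null ? null : result.ToString();
            }
        }
    }
}
```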

I am finally at that point in the project where you (and possibly your team) have built up a codebase large enough that you can actually see components working together, and can really begin to see how the system is functioning and what needs improvement. It’s been a bit slow, and I’ve had to brush up on my SQL (especially for database creation) as well as SOAP and XML. With a good foundation now, it’s hard to choose from the tons of cool ideas I have on how to extract more data and how to run things more efficiently. Overall it’s proving to be a really good learning experience.

First data sent!

June 3, 2008

Today the service sent its first data ever. The data was very simple, but it’s still cool that it works.

First ever VSTFS data sent.

Don’t get too happy yet, though. The code is all over the place, and a ton of features, methods, properties, functions, etc. are missing or need heavy refactoring. It definitely helped that there was already a wrapper class for the Visual Studio plugin that I could use. It may be possible to build further upon the existing code to make it easier for other people developing sensors in C#.