Thursday, May 28, 2009

A RADical Departure from the V-Model

One of the hats I wear these days is that of AE Safety Report Designer / Developer. In this role, I take the user requirements for the reports, mock them up, and code them in whichever reporting tool is appropriate. This is fairly standard stuff, but I’ve found that moving away from my beloved V-model methodology for small activities pays big rewards. I’m talking about an iterative design/prototype/build loop that much more closely resembles Rapid Application Development (RAD). Is this a deviation from the “Flying V”? Not really. It’s just a different approach to gathering and locking down user requirements that happens to work very well for report design. I use the final prototype as part of the design documentation, and the whole process is wrapped in the overall traceability of the V methodology.


We start the process off with a standardized request form that the users / requestors fill out to capture the basic business functionality and data requirements. I take this form and create a SQL version of the report (using TOAD) and export the results into a pre-formatted Excel template. I’ll adjust the formatting of the spreadsheet to get a preliminary “look and feel”, and send it off to the requestor for feedback. They are responsible for making sure that the prototype meets their expectations, that the data is what they expect, and that the layout works for them. Their feedback is incorporated into a subsequent prototype. Sometimes it makes sense to sit down side by side with the requestor at the beginning to understand what they need, especially if the report is not of the standard AE Listing or Summary Tabulation variety. The end result is a set of SQL code and a report mockup in a spreadsheet that can be used when developing the report in whichever robust reporting tool delivers these standardized, user-driven reports, such as Cognos or Business Objects. If you are wondering about the waves in the middle of the graphic, those are harmonic waves, hopefully spreading their soothing harmony to all involved in the process.
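For the curious, here’s a simplified sketch of what one of those TOAD prototype queries might look like. The table and column names are purely illustrative, not an actual safety schema:

    -- Hypothetical AE listing prototype (illustrative schema, not a real one).
    -- Pull case, drug, and event details for the requested reporting period,
    -- then export the results into the pre-formatted Excel template.
    SELECT c.case_id,
           c.drug_name,
           e.event_pt      AS preferred_term,
           e.serious_flag,
           e.onset_date
      FROM ae_cases  c
      JOIN ae_events e ON e.case_id = c.case_id
     WHERE c.report_date >= TO_DATE('01-JAN-2009', 'DD-MON-YYYY')
     ORDER BY c.case_id, e.onset_date;

Once the requestor signs off on a version of this, the SQL and the formatted spreadsheet become the design inputs for the real report build.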

Tuesday, May 19, 2009

Picture of Wisdom

A picture conveys a thousand words. But sometimes, the words may be in another language. For example, I liked an article about knowledge management that describes the progression from data to information to knowledge to wisdom. Here’s the URL: Data, Information, Knowledge, and Wisdom

Now, I figured that these concepts could be useful in describing the value of a business intelligence system to a group of business users (in this case, Medical, Clinical, and Regulatory folks). After all, they know what data is, and they have systems that store information about each of their respective domains. They even share some information in a few reports. But I was trying to convince them that we could use the data and information they had and go even further. I was hoping to build out some predictive analysis and pattern recognition dashboards to take their current level of Business Intelligence from “dimwit” to “genius” level.

But the picture in the article that caught my attention was not going to convey anything to this group. Here’s my version of that picture:



This picture was a theoretical concept only. I needed to tie it to some systems architecture. So I overlaid some familiar terms and concepts to lock the idea down. I ended up with a flow that was familiar, yet still conveyed the need to keep moving along the spectrum, and it gave me some talking points about my vision for this new BI system. Here is my new, improved picture.


(can't read the picture? click on it for full size.)

Well, it worked. I was able to discuss possibilities in both visionary and pragmatic terms, pointing to parts of the picture and describing algorithms, graphics, alerts, and all the other cool BI stuff. It was much better than trying to do so while pointing to a box that said “wisdom”. So, thanks to the guys at Systems Thinking for getting me thinking about the value chain of BI in something other than servers, clients, and code.

Friday, May 15, 2009

Data Warehousing Best Practices

Data warehousing is not a new technology by any means. But it is one of those subjects that intimidates many folks who don't have experience with or exposure to it. I've spent the past several years working on various data warehouses and data marts in the Life Sciences industry, and have developed a few tips and techniques to avoid some of the most common problems. Let's face it: data warehouses suffer higher rates of cost overruns, and even outright project failures, than other systems. This is largely because they are complicated projects with several focused, specialized teams that aren't always communicating well. So I put together a list of these "best practices" and my company, Intrasphere, published them in a white paper. Which makes it a prime candidate for sharing here.

Now, if you are a data warehouse guru looking for bleeding-edge approaches that will shake the earth as you read, don't bother. This white paper is written for project managers and IT managers who may not have tons of data warehousing experience, but are looking for practical advice they can both understand and use.

Data Warehouse Best Practices

As always, I appreciate your feedback and comments.

Wednesday, May 13, 2009

Administrivia - About Phar-B-IT

Thanks for checking out my new blog. I wanted to take a moment to "set a few expectations": to let you know what you'll find here, and to ask for your feedback.

I started this blog for two reasons. The first is that I've spent quite a lot of time searching the internet for templates, utilities, and advice to help me do my job. Unfortunately, I wasn't always successful. So, I figured I wasn't the only one frustrated by this, and decided to share what I can.

The other reason is to network a bit. I'm hoping that if you're reading this blog, you'll learn a little more about me. Which brings me to my other request: the one for feedback. I'd like to learn a little more about all the folks represented by the dots on the ClustrMap over there on the left. So, please leave comments. Let me know which dot you are (where are you?) and what interests you. Let me know if there is something you want my opinion on, or even if you disagree with me (you won't be the first!)

Now, I'll try to post as often as I can, and I'll certainly check the comments daily. My target right now is two to three posts a week. I'll try to include a helpful template at least two or three times a month in order to build up a nice little online library. And lastly, I'll try to keep the puns and bad jokes to a minimum, although this is probably the first promise I'll break.

Tuesday, May 5, 2009

Validation with a Flying V


When I was young, I liked my music and my technology new and cutting edge. But, like all things, as I grew older, I watched what was cutting edge become contemporary, and then contemporary become classic. Take Jimi Hendrix, for example. Here’s a picture of him playing guitar. When he started out, he was breaking new ground with his music. Then he became mainstream, and everyone wanted to be like Jimi. Nowadays, he’s on the Classic Rock and Oldies stations. Same with the guitar he’s playing here. The Flying V was a leap away from the classic guitar design. It was cutting edge, then became the norm for bands that wanted to project a cutting-edge image. Now, like Jimi, the Flying V is a classic oldie.

And it makes perfect sense to me that my favorite SDLC (software development life cycle) methodology follows the same pattern. The V-model started as an innovative approach, breaking away from the norm of the traditional waterfall model. And, of course, it didn’t hurt that it’s easily associated with the coolest-looking guitar the rock world has ever known (with the exception of Gene Simmons’ Axe, that is.)

As for why I like it so much, well, it’s all about the validation. I feel the V-model is the best methodology to support the typical validation approach. QA groups around the industry understand it and like it. So do system designers, analysts, and developers. And anytime a project manager can get both of those groups to agree, they’d be unwise not to go along for the ride.


The idea is quite simple: create a matrix-based mapping for each requirement, and trace the requirements through design, build, and test in your documentation. My Flying V methodology here outlines the traceability. The Unit Test will ensure the coded modules work according to the specifications. The OQ should reference the designed features and functions being tested, and ensure the system works as it was designed. And the PQ will ensure that the business requirements are correct and that the users can actually meet those requirements using the system.
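To make that concrete, a single row of such a traceability matrix might look something like this (the IDs and wording here are made up for illustration):

    UR-012   User Requirement: "Users can list all serious AEs for a given drug"
    DS-07    Design Spec:      report layout and query specification for the listing
    MOD-03   Build:            the coded report module
    UT-031   Unit Test:        verifies the module works according to DS-07
    OQ-012   OQ Test:          verifies the designed feature works as designed
    PQ-005   PQ Test:          verifies users can actually meet UR-012 with the system

Reading across the row, anyone (QA included) can trace a single requirement all the way from the left side of the V to the right.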

Anyway, this has been my favorite methodology approach to validated systems over the past 10 years. Which for some reason is making me feel a little old today.

Saturday, May 2, 2009

Keep it Simple, They Aren't Stupid

System architects like to draw pictures. Pictures often convey technical concepts much more clearly than words. This is because pictures have a way of dumbing down the details of a system architecture to a level that most non-techies can understand. This may sound a bit patronizing or pompous, but it's actually the opposite. If system architects spoke in clear language without overindulging in jargon, we wouldn't need devices like graphical diagrams to get our meaning across.

I find that when I stop trying to impress my fellow techies by speaking in Acronymonish or Comptecheze, the business folks actually listen. And when they listen, they better understand the proposed approaches and issues I'm trying to communicate. And when they understand me, they are in a much better position to trust and agree with me. Win-win, right? I think so.

I'm extremely lucky to have a straight talker as a mentor and role model (my dad, of course.) He is a civil engineer, and I always liked his no-bullshit approach to discussing problems and solutions with his construction clients. Instead of trying to make himself seem smarter by confusing the non-engineers, he would make them feel smarter by getting them to understand engineering concepts. And I've never met anyone that didn't respect that about him.

So, in order to help get my ideas across to folks who don't have a technical background, I use props. A good diagram here, a carefully articulated metaphor there, can make even the most computer-shy executive understand the concepts that may impact their projects.

Here is a real world example:



This is a very simple overview of the system integration between the various components of an Argus Insight Safety Data Mart. It shows the Oracle Database tier, the Cognos Reporting tier, and the Insight UI tier. The overlaps are the integration points, which is really the focus of this diagram. The key is to keep it simple, cover only the points you want, and leave the unnecessary details off the page. In this diagram, I wanted to show how Insight leverages the reports through logical groupings based on the Cognos packages. I also wanted to show that the table joins are Cognos constructs and are not stored in Oracle. Finally, the red arrow illustrates one of the key points: Insight queries the database directly when creating Case Series, and does not go through Cognos to do so.

The "why" for the document was to lay the groundwork for explaining our recommendations for enhancing the reporting system, which live in a diagram still in my toolbox. Hey, gotta keep some things for the paying clients, ya know. ;)


This approach was much better received by the business than the one used by another technician I know. The man had an ego the size of a Manhattan skyscraper, and never passed up an opportunity to drop big, scary techisms on whoever he was with. He thought that if he sounded important enough, people would just defer to his wise and valuable judgement. Nothing could have been further from the truth. Inevitably, he insulted and angered the users and business folks, who stopped listening to him and opposed his ideas just out of spite. Whenever he'd diagram out a system, he'd throw in as much detail as he could, and it would become unreadable. He justified this by saying he wanted to be as thorough as possible. But that's just silly. A diagram will never truly represent a computer system at the level of detail he was attempting. Diagrams work only when they are simple and easy to read. They fail miserably when they aren't.


I have to admit, if you haven't guessed by now, this guy drove me nuts. But he was actually very important to my growth as a systems architect. By watching how business users reacted to him, I was able to learn how to avoid speaking down to them, and how to keep my thoughts, words and diagrams clear, simple, and to the point. And I've cashed in on this lesson time and again.

Wednesday, April 29, 2009

Capacity Planning Template

One of the templates in my toolkit that gets used frequently is my Capacity Planning template. It is used either as a stand-alone document or integrated into a larger architecture document. The goal of this document is to get the system designers to spend a little time thinking about the future. Too often, architects and system integrators are focused on solving today's problems and meeting today's requirements. A capacity and growth plan makes sure the future is also considered.
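As a made-up example of the kind of arithmetic the template prompts for (every number here is hypothetical):

    Current database size:       50 GB
    New safety cases per year:   20,000
    Storage per case:            ~1 MB (data + indexes + attachments)
    Annual growth:               20,000 cases x 1 MB  = ~20 GB/year
    Three-year projection:       50 GB + (3 x 20 GB)  = ~110 GB
    Provisioning target:         ~150 GB, leaving 30%+ headroom

Even a back-of-the-envelope calculation like this forces the "what about year three?" conversation that today's requirements never do.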

Something to keep in mind: I very rarely use any of my templates "as is". They are intended as a starting point for organizing an idea and helping me kick-start it. As they should be. So, to get the document, either click on the title of the article, or here: http://docs.google.com/Doc?id=d4rxkwb_0c3wt5dhc

To download, click on the File menu and select "Download file as" (I uploaded a Word document, but you can choose whatever format floats your boat.)

Tuesday, April 28, 2009

Drug Safety Systems Survey

I developed a simple survey in SurveyMonkey, mostly to kick the tires of a new website, but I did choose a subject I'm interested in. I'd appreciate it if you'd check it out and answer a few questions: http://www.surveymonkey.com/s.aspx?sm=fCwCKv5CDVyk6GasFHxaSw_3d_3d

Thanks in advance (because the free SurveyMonkey account I created won't let me do so afterwards). :)

The Right Tool

I've been working as a systems integration and technology consultant for 15 years now. Prior to that, I worked in the construction industry for 10 years. Surprisingly, many of the skills and problem-solving approaches my previous vocation gave me are very effective in my current occupation.

One thing I learned early on was the value of having the right tool for the job at hand. If you have ever tried to use a hammer on a screw you know what I mean. Of course, another thing I learned was how to make do with the tools you have at hand, even if it’s not the perfect tool for the job. In a perfect world, every time I needed a new tool, I’d just buy it. But that wasn’t practical back then, and it isn't now either. The real world dictates that I buy the tools I use the most, and make do with the ones I have for the odd jobs that require rare or expensive tools.

How does this translate to my current job? I’m currently working on a Drug Safety Reporting system, designing standardized and validated reports that users will run on demand. We’re using the right tools (Cognos/Oracle in this case), gathering requirements, designing the look and feel, and engaging the users and business experts to make sure we get the reports right. We’re also engaging the QA and validation teams to make sure our approach to design, development, and testing is blessed by them. We’re slowly but surely getting the reports into the hands of the users that need them. This process is the right approach for reports that will be used by groups of users time and time again. Monthly listings and summaries, annual and semi-annual regulatory reports, and operational metric reports all fit into this category.

But what about the one-offs? The reports that only need to be run once? Sure, we can try to cover some of these with careful design of some broad requirements and push them through the process, but that still doesn’t fit 100% of the business needs. Nor does it make practical sense to dedicate the cost of the resources required to answer a simple question a user may have about the data.

We could go out and implement a validated, fully-powered, 20-horsepower ad hoc query tool for a significant chunk of $$ (like Cognos!). Cognos would perfectly fit the requirement, but the cost of configuring the software and training users in the ad hoc functionality, as well as the risk of them shooting themselves in the foot due to a lack of expertise in the data model, is enough for my client to nix this. Hey! Wait! Didn't you just read above that we are already using Cognos? Yes, but we are using a "COTS" prepackaged set of frameworks and packages, something we can't change (that would invalidate the COTS support contract) and that has been deemed too "complicated and dangerous" for the average user. It's fine for the report developers (who are experts in this COTS), but not for the everyday user.

Instead, we could look into our toolbox and figure out how to use what we already have. Like TOAD. We are already meeting these requests by having a small, core group of database experts write SQL in TOAD and export the data into either a spreadsheet or a PDF. The challenge is to figure out how to use this tool in a way that can be validated and deliver quality reports, while keeping costs down. Not impossible, and the validation effort is actually similar to implementing a new, off-the-shelf tool, but at a much lower cost.

The key is to come up with an approach to developing the SQL, running and testing it, and delivering it to the users in a way that is approved by the QA group. The plan is to fully document all the commonly used fields, the tables and the joins between them, and any special filters or business rules (such as determining relatedness via causality). Basically, it's a documented brain dump of the aforementioned data experts. Then the approach is to take this coding standard and data dictionary and implement it with processes for requesting and delivering the reports, as well as templates that document requirements, SQL queries, and results for transparency. The result is that reports are delivered in hours rather than the weeks it would take to code a standard report. And if any of these ad hoc reports is needed on a recurring basis, the requirements are already fully documented and easily passed along to the standard report development team.
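Here's a sketch of what one of these ad hoc queries might look like when built to that standard. The table names, the request number, and the causality values are all hypothetical; the point is that the joins and the relatedness rule come straight out of the documented data dictionary:

    -- Ad hoc request #123 (hypothetical): related, serious events for Drug X.
    -- Join and filter logic per the documented data dictionary, so any
    -- reviewer can trace each line back to the coding standard.
    SELECT c.case_id,
           e.event_pt,
           e.causality
      FROM ae_cases  c
      JOIN ae_events e ON e.case_id = c.case_id
     WHERE c.drug_name    = 'DRUG X'
       AND e.serious_flag = 'Y'
       AND e.causality IN ('POSSIBLE', 'PROBABLE', 'DEFINITE')  -- "related" per the standard
     ORDER BY c.case_id;

The query, the request form, and the exported results all get filed together, which is what makes the whole thing defensible to QA.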

The bottom line is this: It's important to use the right tool for the job, but it's not always practical. As a consultant, I've learned to balance quality with pragmatism.