ASSOCIATES (vol. 2, no. 2, November 1995)




                       Selection by Numbers


                           Rosalind Coote
                    Project Team Member/DBA
                   Treasury Information Centre
                          The Treasury
                     Wellington, New Zealand


In a previous article for _ASSOCIATES_, John Lozowsky described
the creation process for the RFP document.  In the spirit of
continuing the story, I will take up the part where we selected
the product and supplier we would pilot.  Rest assured the story
does have a happy ending; as I write this we have 163 real users
trained on the database we finally set up!

I should have run a mile when I got a folder two inches thick
given to me on December 24th 1993 and told to read it through
over the holiday!  This tome was the supplier responses to the
Treasury's Request for Proposal (RFP).

All members of the evaluation team met again in January 1994.
The people involved were: three members of the Information
Services team including myself; the network administrator; and
three representatives of the user community, Mary Anne, Peter and
Jonathan.  Remember those names - they are important!  In terms
of technical and user viewpoints we had an even split, really,
because at that time I had little technical knowledge but had
used a lot of databases.  My role until then had been liaising
with users to prioritise their needs for the new system.

We were evaluating the five tenders that made it through the
first cut.  Information Management had already rejected two for
not complying with the platform (Unix-based solutions were out)
or for clearly inadequate security (being a government agency with
responsibility for economic advice, this is a top consideration).

The first step for the evaluation team was to establish how we
wanted to weight the responses.  This is like saying - in which
areas is the right answer worth more?  After a lot of discussion
we came up with:

50% for Functionality    (how many of the things we required the
                          product could do)
20% for Support          (how well the supplier could support the
                          product)
20% for Vendor           (questions asked to verify the quality
                          of the vendor)
10% for Technical        (how well the tender met our technical
                          requirements)
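To make the arithmetic concrete, here is a minimal sketch (the supplier scores below are invented for illustration; only the 50/20/20/10 weightings come from the process described above):

```python
# Combine per-category scores with the agreed section weightings.
# Only the weightings (50/20/20/10) come from the evaluation; the
# example supplier scores are made up for illustration.

WEIGHTS = {
    "Functionality": 0.50,
    "Support": 0.20,
    "Vendor": 0.20,
    "Technical": 0.10,
}

def overall_score(category_scores):
    """Weighted sum of per-category scores (each out of 100)."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# Hypothetical supplier, scored out of 100 in each category:
supplier = {"Functionality": 80, "Support": 60, "Vendor": 70, "Technical": 90}
print(overall_score(supplier))  # 0.5*80 + 0.2*60 + 0.2*70 + 0.1*90 = 75
```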

The Technical weighting seems low - but it took into account that
this section covered things like backups and drivers, not what
the product actually supplied.  The Support and Vendor weightings
seem high, but for Treasury we viewed these as high risk areas.
Being based in New Zealand, isolation from the vendor causes real
problems, and we had encountered the sharp end of this with our
previous database.  Our site does a lot of customization work on
anything we put in, so we were also looking for a company with a
strong client focus and preferably New Zealand based.

For each question on the database tender the group agreed on a
mark for each supplier.  There were 166 questions times 4 vendors
to get through.  Each question could get a mark of 10, 5 or 0.

        10      = product fully meets our requirement
         5      = product meets our requirement half-way
         0      = product can't meet requirement at all

Suppliers write tenders to make it read like their product can
do anything and everything, so we looked at the answers with a
sceptical eye.

The other thing we did was to adjust marks according to the
priority users gave them.  Each priority was given a weighting
out of ten, and the final mark was the raw mark scaled by that
weighting as a percentage (final mark = mark x priority/10).

        10      = Mandatory
        6       = Highly desirable
        5       = Desirable
        2       = Nice to have

For example:

Requirement.  The system should be able to track updates and
amendments to documents. (Desirable)

Although the tender fully met this requirement (a mark of 10),
the mark was downgraded to 5 because the requirement was rated
only as desirable.
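That adjustment is simple enough to sketch in a few lines (the priority weightings are the ones listed above; the function name is my own):

```python
# Scale a raw mark (10, 5 or 0) by the user-assigned priority,
# expressed as a percentage of the mark.  The weightings are
# those from the priority list above.

PRIORITY = {
    "Mandatory": 10,
    "Highly desirable": 6,
    "Desirable": 5,
    "Nice to have": 2,
}

def final_mark(raw_mark, priority):
    """Raw mark scaled by the priority weighting out of ten."""
    return raw_mark * PRIORITY[priority] / 10

print(final_mark(10, "Desirable"))  # the worked example: 10 * 5/10 = 5.0
```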

There are a lot of different ways to do the numbers when looking
at a tender.  This is just one way, chosen with the help of
evaluation team members Mary Anne, Peter and Jonathan.  They
kept the quality of our process very high indeed.  They were
aiming for a method that would stand up to audit analysis
(should it transpire!) and applied the techniques of their day
jobs (economic analysis) to ensure a sound process was used.
I'd recommend getting number-cruncher types on your evaluation
team!

One of the sections (Vendor quality) wasn't marked in quite such
a rigorous way.  As a team we decided these were judgment-call
questions, and we just gave the answers an overall mark - e.g.
how long has the supplier been in business, how many qualified
staff are employed.

The end result of the evaluation was (1) a spreadsheet
documenting how each product stacked up and (2) a team decision
that only two products were worth seeing demonstrated.
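The spreadsheet's bottom line amounts to summing the priority-adjusted marks per supplier.  A hedged sketch (the questions, marks and supplier labels are invented; only the 10/5/0 marks and priority scaling follow the scheme described above):

```python
# Sum priority-adjusted marks over all questions to get one total
# per supplier, as the evaluation spreadsheet did.  The marks and
# priorities below are invented for illustration.

PRIORITY = {"Mandatory": 10, "Highly desirable": 6, "Desirable": 5, "Nice to have": 2}

def total_score(raw_marks, priorities):
    """raw_marks: 10/5/0 per question; priorities: priority name per question."""
    return sum(m * PRIORITY[p] / 10 for m, p in zip(raw_marks, priorities))

question_priorities = ["Mandatory", "Desirable", "Nice to have"]
supplier_a = [10, 5, 0]    # hypothetical raw marks, one per question
supplier_b = [5, 10, 10]

print(total_score(supplier_a, question_priorities))  # 10 + 2.5 + 0 = 12.5
print(total_score(supplier_b, question_priorities))  # 5 + 5 + 2 = 12.0
```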

Demo Time!!

The two suppliers had about a month to get a demonstration
together.  The reason we gave them this long is because (1) we
wanted them to use sample data from our current system and
therefore they needed to convert it and (2) we were telling them
the sequence of what we wanted to see in the demo.  The demo
script I wrote was the same for each supplier.  One supplier
(Supplier B) took this very seriously indeed and hired in an
overseas consultant just to set up the demo database and give
the demonstration.  This person visited me, looked at the current
system and got an idea of what we were aiming for (we thought
that was quite ethical as the other supplier - Supplier A - had
worked with our database for 5 years already).  Needless to say,
Supplier B's demo was the best by far.

The demos were two hours long.  The first hour was to show how
the users would access the system.  The users at the demo
(Jonathan, Mary Anne and Peter) would leave after the first hour.

The second hour was for the technical staff to see the DB
administration functions and ask technical questions.  The
script we supplied was useful for checking our needs against -
e.g. if they glossed over printing we could say "show us how you
print the hit list again".

Having spent about six months working up to seeing a product
demonstration, we were amazed to find a product that impressed
us mainly because of its simplicity and "up-market" interface.
The users knew straight away that this was the product they
wanted us to pilot, and we were happy to agree because of the
evaluation results and the demonstration the supplier had been
able to provide.

Then we checked our scores again...

A useful exercise we did was to go back and adjust our scores (up
and down!) after seeing the demos.  This wound up the evaluation
process nicely.

Reference Checks

As a final recommendation, reference checks can provide some
fascinating information.  We rang sites in the States and UK
that had been set up for a while and got some useful information
about what it was like to put in the product.

If there are several competing suppliers for the product, this
is a good time to find out how good they are and what their
support might be like!  It keeps your supplier on their toes...


In summary:

. have some users as members of the evaluation team
. agree on weighting for different sections of the tender
. you can also weight answers by how users prioritised it
. send the supplier a demonstration script that they have to
  follow
. reconfirm scores after seeing the demos
. check references by asking other sites some set questions