TESTIE's intelligence is human-like

When it comes to the real usability of a software product, there should be no compromises. Yet we all know from experience that software is often missing some basic, very important element, which becomes obvious when its behaviour differs from our expectations. The problem is especially prominent in today's higher-order software that performs complicated business intelligence tasks, because as complexity rises, so does the potential for unwanted side effects.

That higher-level essence that is so often lacking is intelligent behaviour, and it is exactly this ability that makes TESTIE so special. You can be sure she always does exactly what she has been asked to do - even in the case of the most complicated testing tasks.

Let's have a look at a practical example: say you want to test your candidates for specific skills in Microsoft Word and Outlook. So you create a custom test combining Word & Outlook elements and let the candidates start it - so far so good.

One of the tasks in the Word portion of this test might, for example, read:

On page 4 edit the contents of the footnote to be "HRM means Human Resources Management".
Now what if a candidate makes a typing mistake and his or her solution to this task is a footnote with "HRM measn Human rsources managemet"?

Obviously, the purpose of this task is to test the ability to add and edit footnotes, not typing skills - so typing errors should have no influence on the task result in this case. On the other hand, the testing system cannot be too tolerant, or task solutions would become vague. The correct approach is therefore to find the right balance between tolerance and restriction - in this case, a limit on the percentage of the original text's characters that may be mistyped. So when a candidate solves the task but mistypes fewer than 20% of the characters along the way, the solution will still be marked as correct.
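To make the idea concrete, here is a minimal sketch of how such a tolerance check could work. The function name, the use of Python's standard `difflib`, and the 20% threshold as a default parameter are illustrative assumptions, not TESTIE's actual implementation:

```python
from difflib import SequenceMatcher

def within_typo_tolerance(expected: str, actual: str,
                          max_error_ratio: float = 0.20) -> bool:
    """Return True if the candidate's text differs from the expected text
    by fewer than max_error_ratio of its characters (case-insensitive)."""
    ratio = SequenceMatcher(None, expected.lower(), actual.lower()).ratio()
    # ratio is the fraction of matching characters, so (1 - ratio)
    # approximates the fraction that was mistyped.
    return (1 - ratio) < max_error_ratio

# The footnote example from the task above:
expected = "HRM means Human Resources Management"
actual = "HRM measn Human rsources managemet"
print(within_typo_tolerance(expected, actual))  # → True
```

With this check, the mistyped footnote above still passes, while a completely different text would not.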

This was just a very simple example: TESTIE is smart enough to recognize this kind of "irrelevant" error during task analysis and tolerates it. Even more - she tolerates every user action that has no relation to the meaning of the given task; only the valuable information is taken further into the analysis. Let's demonstrate this further:

A subsequent task in the Outlook portion of the example test could read:

Create a new e-mail and set the recipients to be jmason@testie.net and pwayne@testie.net.
Subject: New employee
Message text: I'm sending you the contact details of our new employee. Yours sincerely (your name)
Attach the contact Anderson, Lisa from the contacts and send the e-mail.
This task consists of several steps, and each of them can be correct or wrong. The process of analysing the task result therefore checks for the presence of each step and then evaluates it.

But let's take this to a really higher level. We do not want any typo tolerance for the recipients' e-mail addresses, as their correctness is required in the real world for the mail to arrive successfully. On the other hand, we should allow tolerance at least for the message body text. So how would a human expert evaluate this task?
  • Search in outbox / sent mail items for an e-mail with recipients jmason@testie.net and/or pwayne@testie.net
    • If such e-mail does not exist, then the whole task was not completed successfully.
    • If it does exist and the recipient list is complete, then we mark this step as correct and continue.
    • If it does exist and the recipient list is not complete (for example, it was sent only to jmason), then we will evaluate the other steps and, depending on them, choose whether to tolerate this mistake.
  • Check if the e-mail has the correct subject
    • If there is no subject at all, it means that the candidate either forgot or didn't know how to write one - in both of these cases, this step will be marked as incorrect.
    • If there is text in the subject that is identical to or approximately matches the original "New employee" subject, then we can mark this step as correct.
  • Check if e-mail body contains correct text
    • Same principle as above applies also here:
    • If there is no text in body at all, this step will be marked as incorrect.
    • If there is text that is identical to or approximately matches the original text, we mark this step as correct.
  • And then the last one: check if the analysed e-mail has the contact "Anderson, Lisa" attached
    • If there is no contact attached at all, this step will be marked as incorrect.
    • If there is a contact attached, but it is not the right one, we also mark this step as incorrect.
    • Only if the right contact is attached do we mark this step as correct.
    • Note that we could have been more tolerant about the right or wrong contact attachment here, but this is an expert-level task, so this step must be done perfectly. We can, however, choose to allow more tolerance in standard-level tasks.
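The checklist above can be sketched as a step-by-step scoring routine. Everything here is an illustrative assumption - the `Email` structure, the field names, and the `evaluate_outlook_task` function are hypothetical stand-ins, not TESTIE's actual internals; the body comparison is simplified to the fixed part of the requested text, since the signature varies per candidate:

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

def approx_match(expected: str, actual: str, tolerance: float = 0.20) -> bool:
    """Approximate text match: fewer than `tolerance` of characters differ."""
    ratio = SequenceMatcher(None, expected.lower(), actual.lower()).ratio()
    return (1 - ratio) < tolerance

@dataclass
class Email:
    recipients: list
    subject: str
    body: str
    attached_contacts: list = field(default_factory=list)

def evaluate_outlook_task(sent_items: list) -> dict:
    """Evaluate the example Outlook task the way a human expert would:
    locate the e-mail, then judge each step on its own."""
    required = {"jmason@testie.net", "pwayne@testie.net"}
    # Step 1: find a sent e-mail addressed to at least one required recipient.
    email = next((m for m in sent_items if required & set(m.recipients)), None)
    if email is None:
        # No such e-mail at all - the whole task was not completed.
        return {"completed": False, "steps": {}}
    steps = {
        # Exact match required: addresses must be correct for mail to arrive.
        "recipients": required <= set(email.recipients),
        # Subject and body tolerate minor typos.
        "subject": approx_match("New employee", email.subject),
        "body": approx_match(
            "I'm sending you the contact details of our new employee.",
            email.body),
        # Expert-level step: exactly the right contact must be attached.
        "attachment": "Anderson, Lisa" in email.attached_contacts,
    }
    return {"completed": all(steps.values()), "steps": steps}
```

For example, an e-mail with both recipients, the subject "new employe" (a tolerated typo) and the right contact attached would still pass, while a missing or wrongly addressed e-mail fails the relevant step.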

So the total points achieved for this task are based on the partial results for each step, which are then combined into a final result using smart, human-like logic. TESTIE takes the same approach as a human expert would and focuses on the "big picture". You will be surprised when you experience her testing abilities in action for yourself.

Check out the free demo or choose from a variety of testing / training packages.