Roll-out plan for telematics, part 2 of 3
June 5, 2018
Developing products and solutions
Depending on the supplier, the type of process used to deliver the work packages that finally end up in a product or a solution can vary, but the processes are often quite similar. For development purposes, a process like Scrum is typically used, and for maintenance purposes, typically a process like Kanban. All work packages are designed, time-estimated and planned within the matching process. The request or ticket is initiated by the Project Manager. For larger projects, a Technical Lead can be appointed to keep track of all the technical activities and decisions.
Tests
Factory Acceptance Test (FAT)
Acceptance testing is conducted to determine whether the requirements of a specification or contract are met. This is done by the system supplier, to make sure that the delivery meets the requirements agreed upon in the project specification. When the tests are completed, you as the purchaser get to review the test protocol. The FAT is done either in a test bench or, if necessary, in a vehicle. In some cases, the FAT has to be done at your site, for example if it is part of a bigger system that is going to be delivered and some sort of third-party dependency is involved. In that case, you decide in the project plan setup how the test situation is going to be handled.
The FAT can consist of two different stages, FAT1 and FAT2, where FAT1 is done in a test bench and FAT2 is done in a vehicle. Sometimes the FAT results in modifications; in that case, a new FAT is often done after the modifications.
Site Acceptance Test (SAT) (or Field Trial)
If there is a SAT, or a selection of depots chosen for a limited field trial, the selection of vehicles should be made by the system supplier in close cooperation with the purchaser. The selection should be based on vehicle manufacturer, type, production year, engine type, FMS version or other CAN connection, and chassis. It is important to choose vehicles that are as representative of the general operation as possible. The SAT is, contrary to the FAT, done by you as the purchaser. It is in your own interest to get to test the system thoroughly.
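To make the idea of a representative selection concrete, here is a minimal sketch in Python. The fleet data and attribute names are purely illustrative assumptions, not taken from any specific system: the sketch simply groups a fleet inventory by the attributes listed above and picks one vehicle per combination that exists in the fleet.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical fleet inventory; in practice this would come from the
# purchaser's asset register. All attribute names are illustrative.
fleet = [
    {"id": "V101", "manufacturer": "A", "type": "city bus", "year": 2015, "engine": "diesel", "fms": "FMS 3.0"},
    {"id": "V102", "manufacturer": "A", "type": "city bus", "year": 2015, "engine": "diesel", "fms": "FMS 3.0"},
    {"id": "V201", "manufacturer": "B", "type": "coach", "year": 2017, "engine": "gas", "fms": "FMS 4.0"},
    {"id": "V301", "manufacturer": "A", "type": "coach", "year": 2012, "engine": "diesel", "fms": "none"},
]

# Group vehicles on the attributes that should drive the selection:
# manufacturer, type, production year, engine type and FMS/CAN variant.
key = itemgetter("manufacturer", "type", "year", "engine", "fms")

trial_vehicles = [
    next(iter(group))  # pick one vehicle per combination found in the fleet
    for _, group in groupby(sorted(fleet, key=key), key=key)
]

for vehicle in trial_vehicles:
    print(vehicle["id"], key(vehicle))
```

The point is not the code itself but the principle: every combination of vehicle attributes that occurs in the fleet should, as far as practical, be represented in the trial.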
Just like the FAT, the SAT can also consist of two different parts.
SAT1 is done in a test bench. The tests are of two kinds: system tests, where you test the system itself, and integration tests, where you test the integration with other systems. Every time you change a system or part of a system, integration tests should be run to avoid expensive showstoppers in operation.
SAT2 is done in the field, in a few vehicles, to verify that the system works out in the field.
A word of warning is in place here. It is easy to neglect the follow-up of the tests, because it can be time-consuming to go through several hundred megabytes of information to make sure that the system functioned the way it should. Therefore, it is vital to have a clear definition of what a successful test is already when setting up the project specifications, and to secure resources for the SAT in the project planning.
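One way to keep the follow-up manageable is to automate as much of it as possible against the pass criteria written into the project specification. The following is a rough, hypothetical sketch only: it assumes the telematics system produces plain-text log files and that the definition of a successful test can be reduced to simple checks, such as a maximum number of error lines and a minimum number of position reports. Real criteria will look different, but the idea of checking them mechanically carries over.

```python
from pathlib import Path

# Hypothetical pass criteria; in a real project these come from the
# definition of a successful test in the project specification.
MAX_ERROR_LINES = 10
MIN_POSITION_REPORTS = 1000

def evaluate_logs(log_dir: str) -> bool:
    """Return True if the collected field-trial logs meet the pass criteria."""
    errors = 0
    positions = 0
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            if "ERROR" in line:        # assumed log keyword for a fault
                errors += 1
            if "POSITION" in line:     # assumed log keyword for a position report
                positions += 1
    print(f"errors={errors}, position reports={positions}")
    return errors <= MAX_ERROR_LINES and positions >= MIN_POSITION_REPORTS

# Example: evaluate_logs("sat2_logs") -> True if the criteria are met
```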
When it comes to evaluating the tests, there are always going to be some faults, which leads to the question: how many faults are acceptable?
First of all, there are three levels of faults:
- Minor faults/blemishes. These are faults that don’t affect the functionality. They can be adjusted in due time and are not of real importance.
- Affecting but not obstructing faults. As long as you can find workarounds for these, the system will still deliver what you were expecting.
- Obstructing faults. These faults need to be addressed, because they render the system hard to use.
Sometimes it can be tricky to decide how many faults are acceptable, since enough affecting faults can, taken together, amount to an obstructing fault. The more thoroughly the issue of acceptable numbers and levels of faults is addressed early in the project, the better the chance that you will find a good path through the test section of the project.
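As an illustration only (the severity names follow the list above, but the thresholds are invented for the example), acceptance criteria of this kind can be made explicit enough to be checked mechanically at the end of the test phase:

```python
from collections import Counter
from enum import Enum

class Severity(Enum):
    MINOR = 1        # blemishes that do not affect functionality
    AFFECTING = 2    # a workaround exists, the system still delivers
    OBSTRUCTING = 3  # renders the system hard to use

# Hypothetical acceptance rule agreed in the project specification:
# no obstructing faults and at most five affecting faults.
def acceptable(faults: list) -> bool:
    counts = Counter(faults)
    return counts[Severity.OBSTRUCTING] == 0 and counts[Severity.AFFECTING] <= 5

print(acceptable([Severity.MINOR, Severity.AFFECTING]))  # True
print(acceptable([Severity.AFFECTING] * 6))              # False
print(acceptable([Severity.OBSTRUCTING]))                # False
```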
Test setup
There are some rules of thumb that are advisable to take into consideration when writing test cases.
- The person who writes the test cases should not be the same person who wrote the code. That person will, unintentionally but almost certainly, write test cases that work well with the written code.
- The person who writes the test cases shouldn’t be the one who does the testing, for the same reason as above: if you wrote the tests, you might unintentionally make them work.
- When writing the test cases, you should have a clear understanding of the project specifications. The best approach is to start out from the specifications, be clear about which test refers to which specification, and add the test cases to the specifications.
- Include at least 5-10% negative test cases, where you do things you normally wouldn’t do, such as entering information you normally wouldn’t enter in a field, or clicking where you shouldn’t click. What happens if you write an erroneous value in a configuration file? Try to get the system to fail. This avoids expensive showstoppers later on.
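As a concrete illustration of a negative test case, the sketch below uses a hypothetical configuration field and loader function (not from any particular system) and checks that an erroneous value in a configuration file is rejected rather than silently accepted:

```python
import json
import pytest

def load_config(text: str) -> dict:
    """Hypothetical loader: the reporting interval must be a positive integer."""
    config = json.loads(text)
    interval = config["reporting_interval_s"]
    if not isinstance(interval, int) or interval <= 0:
        raise ValueError(f"invalid reporting_interval_s: {interval!r}")
    return config

def test_valid_config_is_accepted():
    assert load_config('{"reporting_interval_s": 60}')["reporting_interval_s"] == 60

def test_negative_interval_is_rejected():
    # Negative test: a value you would normally never configure.
    with pytest.raises(ValueError):
        load_config('{"reporting_interval_s": -5}')

def test_garbage_value_is_rejected():
    # Negative test: the wrong type of value altogether.
    with pytest.raises(ValueError):
        load_config('{"reporting_interval_s": "soon"}')
```

If the system under test accepts such values without complaint, that is exactly the kind of finding you want during the SAT rather than in operation.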
Next post
In part 3 of Roll-out plan for telematics, we will discuss delivery, installation and verification, receiving the project, and some matters directly connected to Eco driving, such as creating a statistical base, educating coaches and drivers, and setting up goals. To wrap up the post, we will discuss wrapping up projects with a presentation of targets and outcomes, creating lessons learned and saving documents.
Disclaimer: The content of this blog post is the author’s opinion and doesn’t reflect the opinion of any other person or organisation.