How do you evaluate reliability in software?

We are currently setting up the evaluation criteria for a trade study we will be conducting.

One of the criteria we selected is reliability (and/or robustness - are these the same?).

How do you assess that software is reliable without being able to afford much time evaluating it?

Edit: Along the lines of the response given by KenG, to narrow the focus of the question: You can choose among 50 existing software solutions. You need to assess how reliable they are, without being able to test them (at least initially). What tangible metrics or other can you use to evaluate said reliability?

Regalia answered 7/11, 2008 at 15:3

Reliability and robustness are two different attributes of a system:

Reliability

The IEEE defines it as ". . . the ability of a system or component to perform its required functions under stated conditions for a specified period of time."

Robustness

A system is robust if it continues to operate despite abnormalities in input, calculations, etc.

So a reliable system performs its functions as designed, within stated constraints; a robust system continues to operate when the unexpected or unanticipated occurs.
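
As a toy illustration of that distinction (my own sketch, not part of the IEEE definition): both functions below compute the same thing, but only the second keeps operating when the input is abnormal.

    def average_brittle(values):
        return sum(values) / len(values)      # crashes on [] or non-numeric input

    def average_robust(values, default=0.0):
        """Degrade gracefully instead of crashing on abnormal input."""
        try:
            numbers = [float(v) for v in values]
            return sum(numbers) / len(numbers) if numbers else default
        except (TypeError, ValueError):
            return default                    # e.g. values is None or contains "x"

    print(average_robust([]))          # 0.0 instead of a ZeroDivisionError
    print(average_robust(["1", "x"]))  # 0.0 instead of a ValueError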

If you have access to any history of the software you're evaluating, some idea of reliability can be inferred from reported defects, number of 'patch' releases over time, even churn in the code base.
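
If that history is in a version control system you can reach, a rough sketch like the following turns it into numbers. It assumes a local git checkout at a path you supply (the path below is hypothetical), and treats "fix"-style commit messages as a crude proxy for defect history.

    # Count "fix"-style commits per month as a crude proxy for defect history.
    import subprocess
    from collections import Counter

    def fix_commits_per_month(repo_path: str) -> Counter:
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "-i", "--grep=fix",
             "--pretty=format:%ad", "--date=format:%Y-%m"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line for line in out.splitlines() if line)

    if __name__ == "__main__":
        # Hypothetical path; point it at the candidate you are evaluating.
        print(fix_commits_per_month("/path/to/candidate-repo").most_common(12))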

Does the product have automated test processes? Test coverage can be another indication of confidence.

Some projects using agile methods may not fit these criteria well - frequent releases and a lot of refactoring are expected.

Check with current users of the software/product for real world information.

Indignity answered 7/11, 2008 at 15:33

It depends on what type of software you're evaluating. A website's main (and maybe only) criterion for reliability might be its uptime. NASA will have a whole different definition for reliability of its software. Your definition will probably be somewhere in between.

If you don't have a lot of time to evaluate reliability, it is absolutely critical that you automate your measurement process. You can use continuous integration tools to make sure that you only ever have to manually find a bug once.
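
As one minimal sketch of what an automated measurement could look like, the probe below polls an HTTP health endpoint and reports the fraction of successful checks. It assumes the product exposes such an endpoint; the URL and probe counts are hypothetical and far too small for a real evaluation.

    # Sketch of an automated availability probe against a health endpoint.
    import time
    import urllib.request

    def measure_uptime(url: str, probes: int = 10, interval_s: float = 1.0) -> float:
        """Return the fraction of probes that got an HTTP 200 back."""
        ok = 0
        for _ in range(probes):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    ok += resp.status == 200
            except OSError:
                pass  # HTTP errors, timeouts and refused connections count as downtime
            time.sleep(interval_s)
        return ok / probes

    if __name__ == "__main__":
        # Hypothetical endpoint; substitute the product's real health check.
        print(f"availability over the sample: {measure_uptime('http://example.com/health'):.0%}")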

I recommend that you or someone in your company read Continuous Integration: Improving Software Quality and Reducing Risk. I think it will help lead you to your own definition of software reliability.

Roquelaure answered 7/11, 2008 at 15:16

Well, the keyword 'reliable' can lead to different answers... When thinking of reliability, I think of two aspects:

  1. always giving the right answer (or the best answer)
  2. always giving the same answer

Either way, I think it boils down to some repeatable tests. If the application in question is not built with a strong suite of unit and acceptance tests, you can still come up with a set of manual or automated tests to perform repeatedly.

The fact that the tests always return the same results will show that aspect #2 is taken care of. For aspect #1 it really is up to the test writers: come up with good tests that would expose bugs or imperfections.

I can't be more specific without knowing what the application is about, sorry. For instance, a messaging system would be reliable if messages were always delivered, never lost, never contain errors, etc etc... a calculator's definition of reliability would be much different.
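
As a sketch of the kind of repeatable check described above, assuming the system under evaluation can be driven from test code (the send_message function here is a hypothetical stand-in for that messaging example):

    import unittest

    def send_message(payload: str) -> str:
        """Hypothetical stand-in for the system under evaluation."""
        return payload.upper()

    class RepeatabilityTest(unittest.TestCase):
        def test_same_answer_every_time(self):
            # Aspect #2: repeating the same operation yields the same result.
            results = {send_message("hello") for _ in range(100)}
            self.assertEqual(len(results), 1)

        def test_right_answer(self):
            # Aspect #1: the answer is also the expected one.
            self.assertEqual(send_message("hello"), "HELLO")

    if __name__ == "__main__":
        unittest.main()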

Tails answered 7/11, 2008 at 15:16

Talk to people already using it. You can test for reliability yourself, but it's difficult, expensive, and can be very unreliable depending on what you're testing, especially if you're short on time. Most companies will be willing to put you in contact with current clients if it will help sell you their software, and those clients will be able to give you a real-world idea of how the software handles.

Audwen answered 7/11, 2008 at 15:16

As with anything, if you don't have the time to assess something yourself, then you have to rely on the judgement of others.

Vazquez answered 7/11, 2008 at 15:16

Reliability is one of three aspects of something's effectiveness. The other two are Maintainability and Availability.

An interesting paper, http://www.barringer1.com/pdf/ARMandC.pdf, discusses this in more detail, but generally:

Reliability is based on the probability that a system will break: the more likely it is to break, the less reliable it is. In systems other than software it is often measured as Mean Time Between Failures (MTBF), a common metric for things like a hard disk (e.g., 10,000 hrs MTBF). In software, you could measure it as the mean time between critical system failures, application crashes, unrecoverable errors, or errors of any kind that impede or adversely affect normal system productivity.
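
A small worked sketch of that software analogue, using purely hypothetical figures for an evaluation window:

    # Hypothetical: three unrecoverable failures observed in a 90-day window.
    observation_hours = 24 * 90          # 2160 hours observed
    failures = 3

    mtbf_hours = observation_hours / failures
    print(f"MTBF over the window: {mtbf_hours:.0f} hours")   # 720 hours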

Maintainability is a measure of how long/how expensive (how many man-hours and/or other resources) it takes to fix it when it does break. In software, you could add to this concept how long/how expensive it is to enhance or extend the software (if that is an ongoing requirement).

Availability is a combination of the first two. It tells a planner: if I had 100 of these things running for ten years, after accounting for the failures and for how long each failed unit was unavailable while it was being fixed or repaired, how many of the 100, on average, would be up and running at any one time? 20%, or 98%?
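
Using the usual steady-state formula, availability = MTBF / (MTBF + MTTR), here is a short worked sketch with the hypothetical figures from above:

    mtbf_hours = 720.0    # mean time between failures (from the sketch above)
    mttr_hours = 8.0      # hypothetical mean time to repair/redeploy

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"expected availability: {availability:.1%}")                        # ~98.9%
    print(f"of 100 units, roughly {availability * 100:.0f} up at any one time")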

Dotti answered 7/11, 2008 at 15:59

My advice is to follow the SRE methodology around SLIs, SLOs and SLAs, best summarized in the free Google SRE ebooks.

Looking at reliability more from a tooling perspective, you need monitoring that can measure those SLIs over time.
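
As a minimal sketch of the SLI/SLO bookkeeping involved (all the request counts and the 99.9% objective below are made-up numbers):

    total_requests = 1_000_000
    good_requests = 999_400        # served successfully and within the latency target

    slo = 0.999                    # objective: 99.9% of requests are "good"
    sli = good_requests / total_requests
    error_budget = 1.0 - slo       # allowed fraction of "bad" requests
    budget_used = (total_requests - good_requests) / total_requests

    print(f"SLI: {sli:.4%} against an SLO of {slo:.1%}")
    print(f"error budget consumed: {budget_used / error_budget:.0%}")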

Burlesque answered 28/9, 2020 at 15:8

You will have to go into the process understanding and fully accepting that you will be making a compromise, which could have negative consequences if reliability is a key criterion and you don't have (or are unwilling to commit) the resources to evaluate it appropriately.

Having said that - determine what the key requirements are that make software reliability critical, then devise tests to evaluate based on those requirements.

Robustness and reliability are related, but they are not necessarily the same.

If you have a data server that cannot handle more than 10 connections and you expect 100000 connections - it is not robust. It will be unreliable if it dies at > 10 connections. If that same server can handle the number of required connections but intermittently dies, you could say that it is still not robust and not reliable.

My suggestion is that you consult with an experienced QA person who is knowledgeable in the field for the study you will conduct. That person will be able to help you devise tests for key areas - hopefully within your resource constraints. I'd recommend a neutral third party (rather than the software writer or vendor) to help you decide on the key features you'll need to test to make your determination.

Leticialetisha answered 7/11, 2008 at 15:16

If you can't test it, you'll have to rely on the reputation of the developer(s), along with how well they followed the same practices on this application as on their other, tested apps. Example: Microsoft does not do a very good job with version 1 of their applications, but versions 3 and 4 are usually pretty good (Windows ME was version 0.0001).

Pasteurizer answered 8/2, 2009 at 4:44

Depending on the type of service you are evaluating, you might get reliability metrics or SLIs (service level indicators) - metrics capturing how well the service/product is doing. For example: process 99% of requests in under 1 sec.

Based on the SLIs you might set up service level agreements (SLAs) - contracts between you and the software provider on what SLOs (service level objectives) you expect, with the consequences if they do not deliver on them.
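
A short sketch of checking that example SLI ("process 99% of requests in under 1 sec") against a sample of measured request durations; the latency data here is randomly generated, purely for illustration:

    import random

    # Randomly generated stand-in for measured request durations (seconds).
    random.seed(0)
    latencies_s = [random.uniform(0.05, 1.2) for _ in range(10_000)]

    under_1s = sum(1 for t in latencies_s if t < 1.0) / len(latencies_s)
    print(f"requests under 1 sec: {under_1s:.2%} (SLI), objective (SLO): 99.00%")
    print("SLO met" if under_1s >= 0.99 else "SLO missed")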

Flemming answered 4/9, 2016 at 20:14
