Is there a formal way (or ways) of quantifying potential flaws, or risk, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or some form of risk assessment?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
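
As an illustration of the "risk assessment of some kind" idea: the closest thing to a formal approach I know of is classic risk-based testing, where each component is scored on likelihood and impact of failure, and the product of the two drives how much testing effort it gets. A minimal sketch in Python follows; the component names, scores, and tier thresholds are all invented for illustration and just echo the risk areas listed above.

```python
# Risk-based test planning sketch (all components and scores are hypothetical).
# Classic risk assessment: risk = likelihood of failure x impact of failure.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), judged by the team
    impact: int      # 1 (trivial) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# Example inventory, loosely based on the risk areas above.
components = [
    Component("user input parsing", likelihood=5, impact=4),
    Component("payment processing", likelihood=3, impact=5),
    Component("minors' profile data", likelihood=2, impact=5),
    Component("internal admin report", likelihood=2, impact=2),
]

# Highest-risk components get the deepest testing (property-based tests,
# fuzzing, security review); low-risk ones may only get happy-path tests.
for c in sorted(components, key=lambda c: c.risk, reverse=True):
    tier = "deep" if c.risk >= 15 else "standard" if c.risk >= 6 else "light"
    print(f"{c.name:24} risk={c.risk:2} -> {tier} testing")
```

The scores themselves are still human judgement, of course; the matrix just makes that judgement explicit and reviewable.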

2 points

That makes sense, thank you. Yes, it’s specifically “test quality” I’m looking to measure, as 100% coverage is effectively meaningless if the tests are poor.
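
To make that concrete, here's a toy example (pytest-style, everything invented) where a coverage tool would report 100% yet the test catches nothing, because it never asserts anything:

```python
# Hypothetical example: a buggy function plus two tests.
def apply_discount(price: float) -> float:
    """Intended: apply a 10% discount."""
    return price * 1.1  # BUG: adds 10% instead of subtracting it

# This test executes every line of apply_discount, so a coverage tool
# reports 100% -- yet it asserts nothing and passes despite the bug.
def test_apply_discount_covers_but_checks_nothing():
    apply_discount(100.0)

# A test with identical coverage but real quality pins down the
# behaviour and fails, exposing the bug.
def test_apply_discount_checks_behaviour():
    assert apply_discount(100.0) == 90.0
```

(Mutation testing tools try to automate exactly this distinction, by seeding artificial bugs and counting how many the suite actually catches, though that's a partial measure at best.)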

7 points

Yeah, I’m afraid the only real way to “measure” that is to read through the tests and the code and make a good ol’ human value judgement on the state of both. But it won’t give you a number.
