A difference between statement and decision coverage

Paul’s answer isn’t quite right, at least according to ISTQB’s definitions. There’s quite a significant difference between statement, decision/branch, and condition coverage. I’ll use the sample from the other answer, but modified a bit so I can show all three kinds of coverage. The tests written here give 100% test coverage for … Read more
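Since the teaser cuts off before the example, here is a minimal sketch of the distinction, using a hypothetical function (not the one from the original answer) in Python:

```python
# Hypothetical function used to contrast the coverage criteria
# (illustrative only; not the sample from the original answer).
def classify(a, b):
    result = "low"
    if a > 0 and b > 0:   # one decision built from two atomic conditions
        result = "high"
    return result

# Statement coverage: a single test that takes the if-body executes
# every statement.
assert classify(1, 1) == "high"

# Decision/branch coverage additionally requires the decision to
# evaluate False at least once.
assert classify(-1, 1) == "low"

# Condition coverage requires each atomic condition (a > 0, b > 0)
# to be both True and False across the test set.
assert classify(1, -1) == "low"   # a > 0 is True, b > 0 is False
```

Note how the first test alone already gives 100% statement coverage, yet says nothing about the untaken branch or the individual conditions.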

Recording user data for heatmap with JavaScript

Heatmap analytics turns out to be WAY more complicated than just capturing the cursor coordinates. Some websites are right-aligned, some are left-aligned, some are 100%-width, some are fixed-width and centered… A page element can be positioned absolutely or relatively, floated, etc. And then there are different screen resolutions and even multi-monitor configurations. Here’s how it works in … Read more
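The core idea behind handling all those layouts is to record positions relative to a page element rather than as raw screen pixels. A minimal sketch of that normalization (hypothetical helper names; real heatmap tools also identify which element is under the cursor):

```python
# Store cursor positions as fractions of the hovered element's box,
# so the same point can be replayed at any resolution or layout.
def normalize_point(cursor_x, cursor_y,
                    elem_left, elem_top, elem_width, elem_height):
    """Convert absolute page coordinates into element-relative fractions."""
    return ((cursor_x - elem_left) / elem_width,
            (cursor_y - elem_top) / elem_height)

def denormalize_point(fx, fy,
                      elem_left, elem_top, elem_width, elem_height):
    """Map stored fractions back onto the element as currently rendered."""
    return (elem_left + fx * elem_width, elem_top + fy * elem_height)
```

Because the stored fractions are layout-independent, the replay side can recompute pixel positions against whatever box the element occupies on the viewer’s screen.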

How to implement a custom metric in Keras?

Here I’m answering the OP’s topic question rather than their exact problem. I’m doing this because this question shows up at the top when I google the topic. You can implement a custom metric in two ways, as described in the Keras documentation: import keras.backend as K def mean_pred(y_true, y_pred): return K.mean(y_pred) model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy", … Read more
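The `mean_pred` metric in the teaser simply averages the predicted values, ignoring the labels. Its logic can be sketched without a TensorFlow install by replacing `K.mean` over a tensor with a mean over plain Python floats (a stand-in, not the real batch-wise tensor computation):

```python
# Plain-Python stand-in for the Keras-backend metric in the answer:
#     def mean_pred(y_true, y_pred): return K.mean(y_pred)
def mean_pred(y_true, y_pred):
    # y_true is ignored, exactly as in the original metric
    return sum(y_pred) / len(y_pred)

# With Keras proper, the function object would be passed directly:
# model.compile(optimizer="sgd", loss="binary_crossentropy",
#               metrics=["accuracy", mean_pred])
```

During training, Keras calls such a function per batch with tensors and averages the results per epoch; the stand-in only illustrates the arithmetic.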

Calculate Cyclomatic Complexity for Javascript [closed]

I helped write a tool to perform software complexity analysis on JavaScript projects: complexity-report It reports a bunch of different complexity metrics: lines of code, number of parameters, cyclomatic complexity, cyclomatic density, Halstead complexity measures, the maintainability index, first-order density, change cost and core size. It is released under the MIT license and built using … Read more
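Of the metrics listed, cyclomatic complexity is the simplest to sketch: start at 1 and add 1 per decision point. The snippet below estimates it for Python source via the `ast` module; the node selection is a simplification (real tools, including the JavaScript one above, also weight boolean operators per operand, comprehensions, and so on):

```python
import ast

# Rough estimate of McCabe cyclomatic complexity: 1 + number of
# decision points. Simplified node set for illustration only.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))
```

For example, a function containing a single `if` scores 2: one linear path plus one decision.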

Do you find cyclomatic complexity a useful measure?

We refactor mercilessly and use cyclomatic complexity as one of the metrics that gets code onto our ‘hit list’. 1-6 we don’t flag for complexity (although it could get questioned for other reasons), 7-9 is questionable, and any method over 10 is assumed to be bad unless proven otherwise. The worst we’ve seen was 87 … Read more
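The answer's thresholds amount to a simple triage rule, which can be encoded as a small helper (names and return strings are illustrative, not from any tool):

```python
# Triage a method by the answer's cyclomatic-complexity bands:
# 1-6 unflagged, 7-9 questionable, 10+ assumed bad.
def triage(cyclomatic_complexity: int) -> str:
    if cyclomatic_complexity <= 6:
        return "ok"
    if cyclomatic_complexity <= 9:
        return "questionable"
    return "assumed bad until proven otherwise"
```

Under this rule, the 87 mentioned above lands firmly in the last band.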