Accuracy! One way to counter regression dilution is to add a constraint to the statistical model.
Regression Redress restrains bias by segregating the residuals.
My article: http://data.yt/kit/regression-redress.html
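Regression dilution itself is easy to demonstrate: measurement error in the predictor biases the fitted slope toward zero. A minimal sketch of the effect (illustrative only, not the Regression Redress method; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_slope = 2.0

x_true = rng.normal(0.0, 1.0, n)             # latent predictor
y = true_slope * x_true + rng.normal(0.0, 0.5, n)
x_noisy = x_true + rng.normal(0.0, 1.0, n)   # predictor observed with measurement error

slope_clean = np.polyfit(x_true, y, 1)[0]    # recovers ~2.0
slope_diluted = np.polyfit(x_noisy, y, 1)[0] # attenuated: factor var(x)/(var(x)+var(err)) = 0.5 here
print(slope_clean, slope_diluted)
```

The diluted slope lands near 1.0 instead of 2.0, which is the bias a constrained model tries to redress.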
How to assess a statistical model?
How to choose between variables?
Pearson's #correlation is inappropriate if you suspect the relationship is not a straight line.
If the relationship is monotonic:
"#Spearman’s rho is particularly useful for small samples where weak correlations are expected, as it can detect subtle monotonic trends." It is "widespread across disciplines where the measurement precision is not guaranteed".
"#Kendall’s Tau-b is less affected [than Spearman’s rho] by outliers in the data, making it a robust option for datasets with extreme values."
Ref: https://statisticseasily.com/kendall-tau-b-vs-spearman/
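A quick way to see the difference is a strictly monotonic but non-linear relationship: Pearson is dragged down by the curvature, while Spearman and Kendall score it as a perfect association. A sketch using scipy.stats:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

x = np.linspace(0.0, 5.0, 50)
y = np.exp(x)  # strictly increasing (perfectly monotonic), but far from linear

r, _ = pearsonr(x, y)      # < 1: penalised for non-linearity
rho, _ = spearmanr(x, y)   # 1.0: the ranks agree exactly
tau, _ = kendalltau(x, y)  # 1.0: every pair of points is concordant
print(r, rho, tau)
```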
There are some topics that just instantly generate endless debate. The Copenhagen interpretation of Quantum Physics is one.
In RPGs (D&D and the like), I think the equivalent topic is Referee/DM objectivity and the use of dice. This article by Bob Kruger describes the issue a little better than I have managed in the past (and I have tried to explain it many times without much success).
https://web.archive.org/web/20160520122457/http://www.baen.com/danddmasters
Redressing #Bias: "Correlation Constraints for Regression Models":
Treder et al. (2021) https://doi.org/10.3389/fpsyt.2021.615754
"In real life, we weigh the anticipated consequences of the decisions that we are about to make. That approach is much more rational than limiting the percentage of making the error of one kind in an artificial (null hypothesis) setting or using a measure of evidence for each model as the weight."
Longford (2005) http://www.stat.columbia.edu/~gelman/stuff_for_blog/longford.pdf
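Longford's point can be made concrete with a toy expected-loss calculation (all numbers hypothetical): instead of thresholding a p-value, weigh each decision by the losses its errors would incur.

```python
# Hypothetical screening decision with P(condition) = 0.1.
p_condition = 0.10

# Made-up loss table: loss[action][state].
loss = {
    "treat":    {"healthy": 1.0, "condition": 0.0},   # cost of unnecessary treatment
    "no_treat": {"healthy": 0.0, "condition": 20.0},  # cost of missing the condition
}

def expected_loss(action):
    # Weigh each anticipated consequence by its probability.
    return (1 - p_condition) * loss[action]["healthy"] + p_condition * loss[action]["condition"]

best = min(loss, key=expected_loss)
print({a: expected_loss(a) for a in loss}, "->", best)
```

Here treating costs 0.9 in expectation versus 2.0 for not treating, so the consequences, not a significance threshold, drive the choice.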
I'm teaching my first lecture at the new job today, about probabilistic logic programming, probabilistic inference, and (weighted) model counting.
Some of the required reading is a paper (https://eccc.weizmann.ac.il/eccc-reports/2003/TR03-003/index.html) that was written by a great mentor of mine, prof. dr. Fahiem Bacchus. He passed away just over 2 years ago, and I am honoured to keep his memory alive by teaching his ideas to a new generation of students. Hope to do him proud.
Please send good vibes?
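Weighted model counting itself fits in a few lines: sum, over all satisfying assignments of a Boolean formula, the product of per-literal weights. A brute-force sketch (toy formula and weights of my own choosing, not from the paper):

```python
from itertools import product

# Toy CNF over variables a, b, c: (a OR b) AND (NOT a OR c)
def formula(a, b, c):
    return (a or b) and ((not a) or c)

# Weight of each variable being True (1 - w for False); made-up values.
weights = {"a": 0.3, "b": 0.6, "c": 0.8}

def wmc():
    total = 0.0
    for a, b, c in product([False, True], repeat=3):
        if formula(a, b, c):
            assignment = {"a": a, "b": b, "c": c}
            w = 1.0
            for var, val in assignment.items():
                w *= weights[var] if val else 1.0 - weights[var]
            total += w  # add the weight of this satisfying assignment
    return total

print(wmc())  # sums the weights of the 4 satisfying assignments
```

Real WMC solvers avoid this exponential enumeration via knowledge compilation, but the brute-force version defines what they compute.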
Surveys, coincidences, statistical significance
"What Educated Citizens Should Know About Statistics and Probability"
By Jessica Utts, in 2003: https://ics.uci.edu/~jutts/AmerStat2003.pdf via @hrefna
In 2016, the American Statistical Association #ASA made a formal statement that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result".
It also stated that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone".
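The first point is easy to demonstrate in simulation: with a large enough sample, a practically negligible effect still produces a tiny p-value. A sketch with made-up numbers:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)

# A negligible true effect: mean 0.02 on a scale where the sd is 1.
sample = rng.normal(loc=0.02, scale=1.0, size=100_000)

stat, p = ttest_1samp(sample, popmean=0.0)
effect = sample.mean()  # still ~0.02, too small to matter in practice
print(f"effect = {effect:.3f}, p = {p:.2e}")  # "significant" despite the tiny effect
```

The p-value shrinks with n while the effect size does not, which is exactly why the ASA says the two must not be conflated.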