#StackBounty: #distributions #logistic #normal-distribution #pdf #model Relationship between a logistic decision function and Gaussian …

Bounty: 50

Imagine an experiment, in which an observer has to discriminate between two stimulus categories at different contrast levels $|x|$. As $|x|$ becomes lower, the observer will be more prone to making perceptual mistakes. The stimulus category is coded in the sign of $x$. I’m interested in the relationship between two different ways of modeling the observer’s "perceptual noise" based on their choices in a series of stimulus presentations.

The first way would be to fit a logistic function

$$p_1(x) = \frac{1}{1+e^{-\beta \cdot x}}$$

where $p_1(x)$ is the probability of choosing the stimulus category with positive sign ($S^+$). Here, $\beta$ would reflect the degree of perceptual noise.

A second way would be to assume that the observer has Gaussian noise $\mathcal{N}(0,\sigma)$ around each observation of $x$, and then to compute the probability of choosing $S^+$ by means of the cumulative distribution function as follows:

$$p_2(x) = \frac{1}{\sigma\sqrt{2\pi}}\int\limits_{z=0}^{\infty}e^{-\frac{(z-x)^2}{2\sigma^2}}\,dz$$

In this case, $\sigma$ would be an estimate of the perceptual noise.

I have a hunch that both these approaches are intimately related, but I’m not sure how. Is it an underlying assumption of the logistic function that the noise is normally distributed? Is there a formula that describes the relationship between $\beta$ of $p_1(x)$ and $\sigma$ of $p_2(x)$? Are, in the end, $p_1(x)$ and $p_2(x)$ essentially identical, and could $p_1$ be derived from $p_2$?
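The hunch is easy to check numerically: $p_2$ is a probit (Gaussian CDF) model, and the classic matching constant $\beta \approx 1.702/\sigma$ makes logistic and probit curves agree to within about 0.01 everywhere. A minimal sketch (Python, with a hypothetical value for $\beta$):

```python
import numpy as np
from math import erf, sqrt

beta = 2.0                # hypothetical logistic slope
sigma = 1.702 / beta      # classic logistic/probit matching constant

x = np.linspace(-3, 3, 601)
p1 = 1.0 / (1.0 + np.exp(-beta * x))                                   # logistic model
p2 = np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])   # Gaussian-noise model

max_gap = np.abs(p1 - p2).max()
print(max_gap)  # below 0.01: the curves are nearly indistinguishable
```

So the two models are not identical, but they are practically interchangeable; the logistic corresponds to logistically (not normally) distributed noise.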

Get this bounty!!!

#StackBounty: #confidence-interval #p-value #model #model-comparison p value for difference in model outcomes

Bounty: 50

I’ve run two different linear mixed effects models on the same data and got two different estimates for the gradient of the longitudinal variable, e.g.:

  • model 1 has estimate 30 with standard error 5.
  • model 2 has estimate 40 with standard error 4.

I’m interested in calculating a p value for the difference between the models, using only the estimates and standard errors. How do I do this? I’m aware that checking for overlap in the 95% confidence intervals is a bad idea, and that overlapping 83% CIs are a better test, but I would like to quantify this with a p value.
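Under the (strong) simplifying assumption that the two estimates are independent, a Wald-type z statistic gives such a p value; note that both models were fit to the same data, so the estimates are correlated and this is only an approximation. A sketch:

```python
from math import sqrt, erf

est1, se1 = 30.0, 5.0   # model 1
est2, se2 = 40.0, 4.0   # model 2

# Wald-type z statistic, treating the two estimates as independent
# (an approximation: both models were fit to the same data).
z = (est2 - est1) / sqrt(se1**2 + se2**2)
p = 1 - erf(abs(z) / sqrt(2))   # two-sided p value under a normal reference

print(round(z, 3), round(p, 3))
```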

Get this bounty!!!

#StackBounty: #classification #model #group-differences #supervised-learning A method to separate classes while taking variable depende…

Bounty: 100

I posted a question related to this problem over a year ago, and we still have not been able to figure it out.

We have two groups, A and B, that we want to train on in order to separate them. Both have numerous “text” observations, so for example:

group A:


group B:


Notably, our original datasets are much larger, with around 4,000 observations for A (with really specific patterns) and around 20,000 for group B.
What we want is a model that sees things like:

  • if there is a C at position 1, we see a B at the end in group A (2/3), but we do not see this in group B (0/3)
  • we only find the motif AAABBB in group A
  • if we see AAABBB, we also see a C at the end in group A (1/3), but we do not see this in group B (0/3)

We have now tried LDA (after converting this data to binary vectors); however, it scores each letter independently. To illustrate: if group A had two subgroups:

  • sub1: position1 = A + position10 = C
  • sub2: position2 = A + position15 = B

and both are not common in group B, then a method like LDA would also score position1 = A (from sub1) + position15 = B (from sub2) extremely high, even though these are actually parts of different dependencies within group A. So we are looking for an alternative that accounts for such dependencies when differentiating the groups.
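One generic way to make such conjunctions visible to a linear scorer like LDA is to add explicit interaction (AND) features to the binary encoding; a hypothetical sketch using the sub1/sub2 example (positions and values invented for illustration):

```python
import itertools
import numpy as np

# Toy binary encoding; columns: (pos1=A, pos10=C, pos2=A, pos15=B).
# Group A has two subgroups (sub1 = columns 0&1, sub2 = columns 2&3);
# the "mixed" pattern pos1=A together with pos15=B never occurs in A.
A = np.array([[1, 1, 0, 0],    # sub1
              [1, 1, 0, 0],    # sub1
              [0, 0, 1, 1]])   # sub2

def with_pairs(X):
    """Append all pairwise AND features so conjunctions become explicit columns."""
    pairs = [X[:, i] * X[:, j] for i, j in itertools.combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

XA = with_pairs(A)
# Pair (0, 3) = "pos1=A AND pos15=B" is column 6 of the expanded encoding;
# it is zero for every observation in group A, so a linear method on the
# expanded features can no longer reward the mixed pattern.
print(XA[:, 6])
```

Note that enumerating all conjunctions blows up combinatorially for long sequences; tree-based models (decision trees, gradient boosting) learn such conjunctions implicitly instead.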

We really hope someone here can help us out!

Get this bounty!!!

#StackBounty: #c# #.net #asp.net-mvc #model Build a Layout ViewModel based in a Id from the URL with MVC

Bounty: 50

I need to build a ViewModel for my web application’s layout. I have tried this solution, but it does not generate the layout’s ViewModel based on an Id coming from the URL.

I tried this, but I first got an error about a missing parameterless constructor; then I tried to include the id as a constructor parameter, but I get the error “Object reference not set to an instance of an object.” because LayoutColorRGB is not set.

public MobileController(int id)
{
    Event model = db.Events.Where(s => s.Id == id).FirstOrDefault();

    LayoutVM = new LayoutVM()
    {
        EventId = model.Id,
        LayoutColorRGB = model.LayoutColorRGB,
        SponsorLogoLink = model.SponsorLogoLink,
        SponsorLogoURL = model.SponsorLogoURL
    };

    ViewData["LayoutVM"] = LayoutVM;
}

Get this bounty!!!

#StackBounty: #dataset #model #computer-vision #object-detection What would be the ideal dataset to train a model to detect advertiseme…

Bounty: 100

I am thinking about the requirements for training a model that would be able to detect whether there is any kind of ad in an image.

I know that this sounds too broad, not just for a question on CV but for the model itself.

There are numerous problems like:

  • The non-standard format of advertisements.
  • The fact that ads can contain pictures as well as plain text, and those pictures will themselves display objects.
  • The fact that in most cases ads are part of other objects, for example the front page of a magazine, the picture on a TV at a given moment, the contents of a billboard, a leaflet on the front windshield of a car, etc.

Still, I’d like to make an attempt, so I am wondering what the ideal dataset to train a model for this task would be.

What I’ve come up with is to use a dataset of company logos and train a model to detect logos in pictures.

Yet this strategy would eventually lead to more problems, like:

  • False positives due to the fact that company logos also appear on the products themselves, apart from product advertisements. This particular problem could be solved if there were a way to configure the model to mark an object (a logo in this case) only if it occupies a portion of the picture larger than X%, since, for example, a logo on a car is relatively small compared to the car, in contrast to the proportions of a company logo in a magazine advertisement.
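The size-threshold idea above is straightforward to apply as a post-processing step on a detector’s output; a hypothetical sketch (the detection format is invented):

```python
def filter_by_area(detections, image_w, image_h, min_fraction=0.05):
    """Keep only detections whose bounding box covers at least
    min_fraction of the image area, to suppress small incidental logos."""
    image_area = image_w * image_h
    kept = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        if (x2 - x1) * (y2 - y1) / image_area >= min_fraction:
            kept.append(det)
    return kept

# A small logo on a car vs. a large logo filling a magazine ad:
dets = [{"box": (0, 0, 40, 20)}, {"box": (100, 100, 500, 400)}]
big = filter_by_area(dets, 640, 480)
print(big)  # only the large box survives
```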

So, any ideas on which criteria I should take into consideration to create a useful dataset for this task are welcome.

Get this bounty!!!

#StackBounty: #r #model #fitting #splines #derivative How can I fit a spline to data that contains values and 1st/2nd derivatives?

Bounty: 50

I have a dataset that contains, let’s say, measurements of position, speed and acceleration. All come from the same “run”. I could construct a linear system and fit a polynomial to all of those measurements.

But can I do the same with splines? What is an ‘R’ way of doing this?
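The linear-system idea itself is easy to write down: build one design matrix whose rows are the basis functions evaluated at the value observations, their first derivatives at the speed observations, and their second derivatives at the acceleration observations. A sketch with a polynomial basis (in Python rather than R, using simulated data analogous to the code below):

```python
import numpy as np

rng = np.random.default_rng(0)
deg = 2  # the simulated truth below is quadratic

# Simulated observations of f, f' and f'' (mirroring the R simulation)
x_f   = rng.uniform(0, 5, 5);   y_f   = 2 + x_f - 0.5 * x_f**2 + rng.normal(0, 0.1, 5)
x_df  = rng.uniform(3, 8, 8);   y_df  = 1 - x_df               + rng.normal(0, 0.3, 8)
x_ddf = rng.uniform(4, 9, 10);  y_ddf = -1.0                   + rng.normal(0, 0.6, 10)

k = np.arange(deg + 1)
rows_f   = x_f[:, None] ** k                                      # basis x^k
rows_df  = k * x_df[:, None] ** np.maximum(k - 1, 0)              # d/dx: k x^(k-1)
rows_ddf = k * (k - 1) * x_ddf[:, None] ** np.maximum(k - 2, 0)   # d2/dx2: k(k-1) x^(k-2)

X = np.vstack([rows_f, rows_df, rows_ddf])
y = np.concatenate([y_f, y_df, y_ddf])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to the true coefficients (2, 1, -0.5)
```

The same recipe works for a spline basis: replace the monomial rows with the basis functions and their analytic derivatives.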

Here is some simulated data I would like to fit:

library(splines)
library(ggplot2)

f <- function(x) 2+x-0.5*x^2+rnorm(length(x), mean=0, sd=0.1)
df <- function(x) 1-x+rnorm(length(x), mean=0, sd=0.3)
ddf <- function(x) -1+rnorm(length(x), mean=0, sd=0.6)

x_f <- runif(5, 0, 5)
x_df <- runif(8, 3, 8)
x_ddf <- runif(10, 4, 9)

data <- data.frame(type=rep('f'), x=x_f, y=f(x_f))
data <- rbind(data, data.frame(type=rep('df'), x=x_df, y=df(x_df)))
data <- rbind(data, data.frame(type=rep('ddf'), x=x_ddf, y=ddf(x_ddf)))

ggplot(data, aes(x, y, color=type)) + geom_point()

m <- lm(y ~ bs(x, degree=6), data=data) # but I want to fit on f, df, ddf. possible?


Get this bounty!!!

#StackBounty: #regression #model #singular Problematic data for regression model

Bounty: 50

This is a follow-up question to Which model for my data? (testing the differences in slope for three groups).

The solution from there works (big thanks to Heteroskedastic Jim!), but I have a problem with one specific data set. Maybe someone can enlighten me as to why I get stuck.

Here is an example that works:


Input = ("
Group   Time    Size
         A  1   1.08152
         A  2   1.10589
         A  3   1.13292
         B  1   1.04597
         B  2   1.05763
         B  3   1.07023
         B  4   1.08612
         B  5   1.10059
         B  6   1.11589
         B  7   1.13143
         B  8   1.14741
         B  9   1.16721
         B  10  1.18288
         C  1   1.04777
         C  2   1.06145
         C  3   1.07484
         C  4   1.08908
         C  5   1.10346
         C  6   1.11866
         C  7   1.13375
         C  8   1.14931
         C  9   1.16563
         C  10  1.18294
")
dat = read.table(textConnection(Input), header=TRUE)

This constructs the model:

(m1 <- gls(Size ~ Time * Group, dat, correlation = corAR1(form = ~ Time | Group), weights = varIdent(form = ~ 1 | I(Group == "A"))))

And this provides me with the p-values for slope differences:

pairs(emtrends(m1, ~ Group, var = "Time", df = Inf, options = get_emm_option("emmeans")))

Now the data set where I get stuck:

Input = ("
Group   Time    Size
         A  1   1.6210
         A  2   2.1118
         A  3   2.6026
         A  4   3.0934
         B  1   0.9162
         B  2   1.2122
         B  3   1.5082
         B  4   1.8042
         B  5   2.1002
         B  6   2.3962
         B  7   2.6922
         B  8   2.9882
         B  9   3.2842
         B  10  3.5802
         C  1   0.82701
         C  2   1.13441
         C  3   1.44181
         C  4   1.74921
         C  5   2.05661
         C  6   2.36401
         C  7   2.67141
         C  8   2.97881
         C  9   3.28621
         C  10  3.59361
")
dat = read.table(textConnection(Input), header=TRUE)

When I construct the above model with this specific data

(m1 <- gls(Size ~ Time * Group, dat, correlation = corAR1(form = ~ Time | Group), weights = varIdent(form = ~ 1 | I(Group == "A"))))

I get this error message:

Error in glsEstimate(object, control = control) : computed "gls" fit is singular, rank 6
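One thing worth checking about this second data set: each group’s Size values increase by an exactly constant step, i.e. they lie perfectly on a straight line, so the model has essentially zero residual variance, which is one way a gls fit can come out computationally singular. A quick check (in Python, on the values pasted above):

```python
import numpy as np

# Size values for groups A and B from the problematic data set
size_A = np.array([1.6210, 2.1118, 2.6026, 3.0934])
size_B = np.array([0.9162, 1.2122, 1.5082, 1.8042, 2.1002,
                   2.3962, 2.6922, 2.9882, 3.2842, 3.5802])

print(np.diff(size_A))  # every step is exactly 0.4908
print(np.diff(size_B))  # every step is exactly 0.2960
```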

I have tried analyzing the data in SPSS, but I got stuck there as well.

So my question is: where is the problem with my data and what can I do to solve it?

Get this bounty!!!

#StackBounty: #machine-learning #model #model-evaluation #validation Relation between uplift and model performance

Bounty: 50

I am trying to compute the uplift for a campaign. For this I am building a model (or several models). I need to know how much individual model performance should impact my uplift computation. Is there any relation between the two?

In simple words: if x percent of error occurs in the model predictions, what percentage of error will it produce in the uplift computation?

Get this bounty!!!

#StackBounty: #hypothesis-testing #model-selection #dataset #model #error How often can a fixed test data be used to evaluate a class o…

Bounty: 50

Suppose I have a fixed training data set $D$ and a fixed test data set $F$ and suppose I have an infinite class of models (for example, for simplicity, indexed by a hyperparameter) that can be trained on data.

If I keep training models using $D$ and then evaluate their performance on $F$, in order to find better and better models, won’t I “illegally” incorporate knowledge from the test data set into my model, since I effectively use the test data set to build a model, instead of only evaluating its generalization performance?
I have a vague feeling I should not use the test data set “too often” (whatever “too often” might mean).

(To make my somewhat vague question concrete, one could imagine the model class to consist of neural networks for binary classification, each with a different architecture. $D$ and $F$ are large sets of labelled images of flowers of type “A” and type “B”, and the loss function is the $\ell_2$ norm.)
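The concern can be made concrete with a small simulation: evaluate many models that are in fact just guessing on one fixed test set and select the best one; its test score is optimistically biased even though its true accuracy is chance. A sketch (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_test, n_models = 200, 500

y = rng.integers(0, 2, n_test)                  # fixed test labels (playing the role of F)
preds = rng.integers(0, 2, (n_models, n_test))  # "models" that guess at random
accs = (preds == y).mean(axis=1)

print(accs.mean())  # ~0.5: each individual model performs at chance
print(accs.max())   # clearly above 0.5: bias from selecting on the test set
```

The gap between the best observed score and the true chance level grows with the number of models compared on the same test set, which is exactly why repeated test-set use leaks information.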

Get this bounty!!!