31 points

This is fairly standard survey design, I believe. They’re not looking to learn which features are wanted in general; they want to know their relative popularity. The sets you’re presented with are randomised (i.e. we don’t all get to see the same sets), which lets them build a ranked list of lots of potential features while only having to run ten survey questions per participant.

If you get a set with three features that everyone likes or dislikes at about the same level, then it doesn’t really matter what you answer: they’ll all end up at the top or bottom of the list, respectively. That works because each of those options is also presented as part of different sets to different users, where a different answer can win out.
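The aggregation can be sketched in a few lines of Python. The feature names and preference weights below are invented for illustration, not taken from the actual survey:

```python
import random
from collections import Counter

# Invented preference strengths (higher = more wanted); NOT the real survey items.
TRUE_PREF = {"tab_groups": 5, "vertical_tabs": 4, "pdf_editing": 3,
             "ai_summaries": 2, "slower_browser": 1}

def run_survey(n_respondents=2000, set_size=3, questions_each=10, seed=42):
    """Each respondent sees random sets and noisily picks a favourite."""
    rng = random.Random(seed)
    features = list(TRUE_PREF)
    wins = Counter()
    for _ in range(n_respondents):
        for _ in range(questions_each):
            shown = rng.sample(features, set_size)
            # Noisy choice: underlying preference plus personal noise decides the pick.
            pick = max(shown, key=lambda f: TRUE_PREF[f] + rng.gauss(0, 1.5))
            wins[pick] += 1
    # Pooled win counts recover the underlying ranking.
    return [feature for feature, _ in wins.most_common()]

ranking = run_survey()
```

Even though each respondent only answers ten noisy three-way questions, the pooled win counts sort the features by their underlying popularity.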

-2 points

@Vincent I couldn’t finish the survey, purely because of the questions suggesting that I should “want” something.

Perhaps if they’d asked the question differently, they’d have gotten a completed survey from me.

I can’t answer loaded questions.

The sample they get is meaningless if only people who complete the survey are counted.

The fact that I couldn’t select none of them and move forward meant something: jerk Mozilla off, or don’t.

I chose not to, and I am a Mozilla user!

#librewolf

2 points

I’m halfway through the survey right now, and rather than continuing, I’m just stalling, because I don’t want to rank another set of three options that I don’t care about. Some of the earlier choices were like “well, I guess I’ll pick the feature that I’ve at least thought about using once…”, but now it’s just a list of three things that I don’t want whatsoever. I’m trying to give useful feedback, but I feel like I’m really just giving noise.

1 point

@blind3rdeye it’s a load of crap, isn’t it?

The statisticians may disagree, but they fail to understand that forcing “want” into the situation is not a true reflection of what people care about.

If they had just tweaked that one word, it wouldn’t be the steaming pile of turds that it is.

It’s almost like they want people to not finish the survey, so they can have a warped sample.

1 point

It doesn’t seem randomised, based on what I’ve seen.

1 point

You mean you’ve taken it multiple times and kept seeing the exact same ten sets?

7 points

The problem with this design is that if people don’t care and aren’t given the option to say so, they’ll give random answers. Knowing that many people don’t care about a specific question would be important information for Mozilla too. So I feel like they should have offered that option. But who am I…

2 points

Any uncertainty would be averaged out by the sheer number of people answering.

2 points

Presumably if people don’t care, they don’t fill in the survey. But as an extra failsafe, they’ve also included the feature “twice as slow as your current browser”. If you rank that high, then your result can probably be discarded.

But yeah, this design has worked well for many other surveys, so presumably it’ll work well for this one. They’re the experts :)
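Acting on a screening item like that at analysis time is trivial. A minimal sketch, assuming a made-up response format (the field names and the `twice_as_slow` label are my own, not Mozilla’s):

```python
# Hypothetical data format: each respondent maps to a list of answers,
# each recording the set shown and the option they ranked highest ("best").
SCREEN_ITEM = "twice_as_slow"

def passes_screen(answers):
    """Keep a respondent only if they never ranked the screening item best."""
    return all(a["best"] != SCREEN_ITEM for a in answers)

panel = {
    "r1": [{"shown": ["tab_groups", SCREEN_ITEM, "sync"], "best": "tab_groups"}],
    "r2": [{"shown": ["tab_groups", SCREEN_ITEM, "sync"], "best": SCREEN_ITEM}],
}
kept = {rid: answers for rid, answers in panel.items() if passes_screen(answers)}
```

Respondent `r2` above picked the obviously-bad item as best, so their answers get dropped before any ranking is computed.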

3 points

> Presumably if people don’t care, they don’t fill in the survey.

That’s not what I said. People do care about the survey; they’re doing Mozilla a favor by filling it in. The problem arises when a question doesn’t offer the answer they want to give. That’s a different scenario from the one you described.

> But yeah, this design has worked well for many other surveys, so presumably it’ll work well for this one. They’re the experts :)

With that attitude, and without acknowledging the problem, it won’t get better. If they were the experts, they wouldn’t need a survey. And it’s easy to deflect any criticism with that dumb argument.

14 points

You’re bang on. It’s called MaxDiff. I use it frequently in my line of work to prioritise product or service messaging with panel data. In some cases it’s better to use inferred preference rather than stated preference, but it’s generally good to keep the options comparable in the “size” of the offer.

I would never interpret a low-end MaxDiff result as “wow, 5% of people want slower browsers.” Instead I focus on the top cluster. As with any model, it’s only ever so accurate. Don’t read too much into the individual questions.
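For the curious: the simplest MaxDiff analysis is plain count-based best–worst scoring. A sketch with toy data (real MaxDiff tooling typically fits a choice model such as hierarchical Bayes rather than raw counts):

```python
from collections import defaultdict

# Toy responses: each records the set shown plus the best and worst pick.
responses = [
    {"shown": ["A", "B", "C"], "best": "A", "worst": "C"},
    {"shown": ["A", "C", "D"], "best": "A", "worst": "C"},
    {"shown": ["B", "C", "D"], "best": "B", "worst": "D"},
    {"shown": ["A", "B", "D"], "best": "A", "worst": "D"},
]

def best_worst_scores(responses):
    """Score each item as (#times best - #times worst) / #times shown."""
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for r in responses:
        for item in r["shown"]:
            shown[item] += 1
        best[r["best"]] += 1
        worst[r["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

scores = best_worst_scores(responses)
```

Items never chosen as best and often chosen as worst sink to the bottom; the top cluster is what you’d prioritise.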

5 points

Why not just get one big list with like 4 answers:

  • really want
  • want
  • meh
  • don’t want

How is that worse than getting like 10 screens of relative answers?

2 points

Because you’ll end up with ten features that all have overwhelmingly “really want” and “want” answers, and then you still don’t know which of those ten to work on first.

1 point

Really? I’d honestly split them about evenly, maybe even more toward the “don’t want” end of the spectrum.

2 points

I hope my response will get thrown out because I prefer a slower browser over built-in AI based personalization.
