Hello Mozilla Connect Community, I’m Chance York, a User Researcher on the Firefox User Research team. I’m reaching out because our team has created a survey to gather opinions on a handful of browser features, some of which were suggested previously on Mozilla Connect. Your feedback on this survey...
that question seemed like it was made by some software dev who shares my opinion lol
i picked 2x slower browser over ai because at least if it is gonna be slower im not gonna be harassed by some amalgamation of every incel shitpost on the Internet
I didn't see that question, but all 3 of those countries seem to rank pretty high on the country demographics for FOSS that I've seen (as in when individual FOSS projects do demographics surveys of their users)
That's why they did it in sets of three. They could just give every user a blank text box for every option, but doing it this way makes it far easier to analyze the data in bulk.
I was doing a political poll just the other day and the third or fourth question was a color question like: “Which of the following is associated most with a ripe banana?”
I debated because I really disliked another option in there (I think it was split-screen for AI or something stupid) and it felt like it was designed to make me not rank something else I didn't like as least desired.
Yup, I stuck that as "least want." I already marked "2x faster performance" as "most want" on another question, so hopefully it all shakes out in the end.
It's an inferred-importance method; as other users have commented, there are likely some calibration metrics in there. MaxDiff is the name of the approach if you want to check out more.
Everyone's here complaining about the randomised questions when I'm just curious why the location options are "Germany", "Brazil", "USA", or "somewhere we don't care about".
You folks are really exaggerating. How is this survey weird? The random questions in groups of 3 make it easy to compare 3 features at a time instead of ranking 60 different features from most wanted to least wanted. In aggregate, from thousands of replies, they can sort all the answers.
I feel like most of these people were way over-analyzing the questions. There's no reason to look for in-depth meaning in the possible answers; just answer them and take them at face value.
Do they publish any of the data from these surveys or use it as an excuse to remove more useful features?
"We listen to our community, so now we're removing about: config access from stable desktop builds to match the mobile version to provide uniform builds, making problems easier to replicate and also provide better security for all. Please use beta or nightly builds for tinkering."
Product managers know which KPIs they want to improve, and it's almost never survey sentiment results. The survey will be used to justify projects that improve the KPIs they already have. Best case scenario, it helps them choose what to work on first; worst case (and most likely), it doesn't matter what the survey says - it's engineered to justify a pet project.
i.e. straight out of How to Lie with Statistics - "Would you rather drink bleach or add in-browser advertisements based on privacy-respecting AI categorization of your browsing behavior?" ... The people have spoken and they OVERWHELMINGLY want more monetization in their browser.
When looking for a new web browser, which feature would you prefer most and which would you prefer least?
A color palette that matches Danny DeVito's armpit hair.
Play the theme to Annie at startup.
Take up all computer resources.
I don't want any of those. Can't we just have a browser that filters all of the popups, junk and advertising?
Nope. You can't progress through the survey without picking one thing you really don't want and at least one of two things you couldn't give a shit less about.
Yeah, some of these questions made more sense for mobile than desktop. For example, I want to split tabs on desktop, but I don't on mobile. Likewise, I don't really care about PWAs on mobile (there's usually an app with a better experience), but I do care on desktop because that isn't a thing on my system (Linux).
I think it would've been better had they broken them into groups for mobile and desktop.
The reason I mention PWAs is very different for desktop than for mobile, but on either, a PWA can live anywhere; it's just a menu-less browser window with the site's icon instead of the Firefox one. Handy for sites with no apps that you want to be able to task-switch to independently. They don't have to be on a home screen or desktop.
Granted, the wording of the question would make one think so.
I hope like hell the sets of questions were randomized, because if they weren't, they were tweaked by the surveyors beforehand to try and force a particular result.
Like the AI question was paired with some incredibly crappy options like "A browser that runs 2x slower than your current browser". Obviously they want you to click that option as least wanted and leave the AI development alone (if that wasn't a randomized grouping).
Similarly, in later questions it looked like they were trying to decide which feature to sacrifice in support of AI dev, because all 3 were things I'd enjoy much more than AI, but I still had to rate one as least wanted.
EDIT: OK, thanks for all the responses everyone! Looks like my pairing of AI and 2x slower was just a bad random selection inducing extreme paranoia on my part. Very happy to hear that.
I took the survey and I didn't like how they occasionally put all the shitty features in one group, where I had to pick one I wanted, then followed it with a group of all good stuff, where I had to say which one I didn't want.
Bizarre survey, yeah, but also why is there a mandatory exact-age question at the end? Isn't it normal to be able to opt out of demographic questions in surveys? It lets us say we'd prefer not to state our gender, but not our exact year of birth?
We want a true-black dark mode for OLED. This missing feature is holding me back from going 100% Firefox on mobile (right now I only use it for horrible websites that are asking for the uBlock Origin treatment).
The "Want most" to "Want least" scale is loaded AF.
Where is the option for "I don't want any of these things"?
Edit: Yeah, fuck that. That survey is bullshit. I stopped bothering to give answers due to the multi-choice questions seeming like a way for Mozilla to have a wank about itself.
This is fairly standard survey design, I believe. They're not looking to know which features are wanted in general; they want to know their relative popularity. The sets you're presented with are randomised (i.e. we don't all get to see the same sets), which allows them to get a ranked list of lots of potential features while only having to run ten survey questions per participant.
If you get a set with three features that everyone likes or dislikes at about the same level, then it doesn't really matter what you answer: they'll all end up at the top or bottom of the list, respectively, because each of those options also gets presented as part of different sets to different users, where different answers can win out.
You're bang on. It's called MaxDiff. I use it frequently in my line of work to prioritise product or service messaging with panel data. In some cases it's better to use inferred preference rather than stated, but it's generally good to keep the options comparable in "size" of offer.
I would never interpret the low end of a MaxDiff model as "wow, 5% of people want slower browsers." Instead, I focus on the top cluster. As with any model, it's only ever so accurate. Don't read into the questions too much.
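If anyone's curious what the aggregation step actually looks like, the simplest version is just best-minus-worst counting across all the randomised screens. Here's a toy sketch in Python (the feature names and response format are made up, and real MaxDiff analysis usually fits a proper choice model on top of counts like these):

```python
from collections import Counter

# Each response is one survey screen: three features shown, one picked
# "want most" and one picked "want least". (Toy data; Mozilla's actual
# response format is unknown.)
responses = [
    {"shown": ["tab_groups", "ai_chatbot", "pwa_support"],
     "best": "tab_groups", "worst": "ai_chatbot"},
    {"shown": ["ai_chatbot", "vertical_tabs", "dark_mode"],
     "best": "dark_mode", "worst": "ai_chatbot"},
    {"shown": ["tab_groups", "dark_mode", "pwa_support"],
     "best": "dark_mode", "worst": "pwa_support"},
]

best, worst, shown = Counter(), Counter(), Counter()
for r in responses:
    best[r["best"]] += 1
    worst[r["worst"]] += 1
    shown.update(r["shown"])

# Best-worst score: (times picked best - times picked worst) / times shown.
# Because the sets are randomised across participants, every feature ends
# up on a comparable scale even though nobody ranked all of them directly.
scores = {f: (best[f] - worst[f]) / shown[f] for f in shown}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature:15s} {score:+.2f}")
```

That's also why a single "bad" triple (like AI vs. a 2x slower browser) doesn't poison the result: each option gets scored across many different sets, not just the one you happened to see.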
I don't know if the survey questions are loaded, but it feels like they could easily be misinterpreted.
For example, somebody might rank the "organize toolbar buttons and AI chatbots" option highly even if they hate AI snake oil, and now Mozilla has a data point where they can say "some of our respondents said they want AI as much as side tabs!"
This seems especially sketchy when the side tab idea came directly from a vocal portion of Mozilla users, while the decision to follow the AI chatbot trend came from the same management that overpays its CEO every year.
I didn't even think about what the questionnaire was for and filled in the entire thing. It's rare to see a FOSS project asking how to improve the very thing I'm staring at this moment. But yes, the questionnaire was a bit oddly structured.