Remember, you can always opt out of sending any technical or usage data to Firefox. Here’s a step-by-step guide on how to adjust your settings. We also don’t collect category data when you use Private Browsing mode on Firefox.
Here’s the current list of categories we’re using: animals, arts, autos, business, career, education, fashion, finance, food, government, health, hobbies, home, inconclusive, news, real estate, society, sports, tech and travel.
No pr0n?
Inconclusive = pr0n is probably a pretty reliable mapping.
Mozilla wants to be an AI company. This is data collection to support that. Telemetry to understand the user browsing experience doesn’t need to be content-categorized.
I want an open source AI to sort my tabs, understand them, and answer my questions about their content. But locally running and offline.
Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train a ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection which either intentionally or unintentionally do most of the shaping of the resulting system. When it’s published it’ll just be a Python project with some magic numbers and “put data here” with no indications of what went into data selection or choosing those parameters.
I just want a command line interface to my browser; then I’ll tell my local Mixtral 8x7B instance to “look in all my tabs and place all tabs about ‘magnetic loop antennas’ in a new window, order them with the most concrete build instructions first”. All with a 100% open source model. I’m looking into the Marionette protocol to accomplish this. It would be nice if it came with that out of the box.
What does “open source” mean to you? Just free/noncorporate? Because a “100% open source model” doesn’t really make sense by the traditional definition. The “source” for a model is its data, not the code and not the model itself. Without the data you can’t build the model yourself, can’t modify it, and can’t inspect why it does what it does.
I think the model can be modified with LoRA without the source data? In any case, if the inference software is actually open source, all the necessary data is free of any intellectual property encumbrances, and it runs without internet access on commodity hardware,
then it’s open source enough to live in my browser.
You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as modifying a trained network. You’re still generally stuck with the original trained capabilities; you’re just reworking the final layer(s) to redirect/tune it towards your problem. You can’t add pet faces into a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it.
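For context on what LoRA actually changes: it learns a low-rank update to a frozen weight matrix rather than retraining the matrix itself. A toy numpy sketch (all names and sizes here are made up for illustration, not from any real model):

```python
import numpy as np

# LoRA idea in miniature: instead of retraining a full weight matrix W
# (d_out x d_in), learn two small matrices A and B whose product is a
# low-rank update added on top of the frozen weights.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4  # r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weights
A = rng.standard_normal((d_out, r))     # trainable down-projection
B = rng.standard_normal((r, d_in))      # trainable up-projection

W_adapted = W + A @ B  # effective weights at inference time

# The update touches every entry of W but only has rank r,
# so only A and B (a fraction of W's parameters) are trained.
print(np.linalg.matrix_rank(A @ B))   # 4
print(A.size + B.size, "trainable params vs", W.size, "frozen")
```

This is also why the parent comment’s point holds: the adapted model still rides on whatever capabilities W already encodes.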
In any case, if the inference software is actually open source, all the necessary data is free of any intellectual property encumbrances, and it runs without internet access on commodity hardware,
then it’s open source enough to live in my browser.
So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.
I mean, I would prefer a data set that’s properly open: The Pile, LAION, Open Assistant, and a pirated copy of every word, song, and video ever written and spoken by man.
But for now I’d be happy to fully control my browser with an offline copy of mixtral or llama
I know they’re a company and they need to stay afloat, but this should be opt-in, not opt-out.
Opt-in telemetry is useless telemetry; they make it opt-out because it’s the only way to get representative numbers.
deleted by creator
I have not seen a single case where advanced users have the same opinions as the average user.
“Advanced users” on forums are rarely very representative of users as a whole.
The number of people who actually change their default settings is quite small. Those of us who have these discussions are a distinct minority in the sum userbase.
deleted by creator
And I agree with you.
deleted by creator
We should really be grateful Google is providing a mainstream open-source browser with a great ecosystem of extensions.
I see no problem with this logic.
deleted by creator
Manifest V3. Enough said.
This looks fine: the browser just puts your search into a category like “health” or “tech”, then sends the count for each category completely anonymously.
Also, if you’ve opted out of data collection already that setting applies to this too.
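For what it’s worth, the scheme as described (categorize locally, send only aggregate counts) would look roughly like this. This is a toy sketch; the keywords and function names are my own invention, not Mozilla’s actual code:

```python
from collections import Counter

# Hypothetical local categorizer: maps a search query to one broad
# category. The real classifier is surely more involved; this is a toy.
KEYWORDS = {
    "flight": "travel", "hotel": "travel",
    "symptom": "health", "doctor": "health",
    "laptop": "tech", "python": "tech",
}

def categorize(query: str) -> str:
    for word, category in KEYWORDS.items():
        if word in query.lower():
            return category
    return "inconclusive"

def aggregate(queries: list[str]) -> Counter:
    """Only these counts would leave the machine, never the query text."""
    return Counter(categorize(q) for q in queries)

counts = aggregate([
    "cheap flights to Spain",
    "knee pain symptoms",
    "best laptop 2024",
])
print(counts)  # category counts only, no search strings
```

The point being: what gets transmitted is a handful of category tallies, not the searches themselves.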
I agree. I am someone who values their privacy and often does not like opt-out style analytics; however, I also know opt-in skews analytics. The way the searches are only categorized, and the use of Oblivious HTTP to keep IP addresses private, makes me A-OK with this.
This is the best take so far, I totally agree
deleted by creator
Importantly, if you have already opted out of sending data to Mozilla, this change will not affect you. It only sends data if you have the setting turned on. It takes just a few clicks to entirely disable it, and Mozilla deletes all records of your browsing within 30 days of turning off this feature. If you’re worried about it, do it now; it’s just under Settings > Privacy & Security. Instructions are also linked in the blog post.
First thing I do on every Firefox installation on every device. 3 clicks and most of this nonsense stops.
I’d appreciate Mozilla not doing something like that in the first place, maybe don’t try to build products and focus on the browser. 🤷♂️
I’d just like for these things to be opt-in, not opt-out.
deleted by creator
From what I read in their blog post, nobody is keeping your search history data. It only tracks how often people in general search for things in specific categories, so nobody will be able to learn anything about you specifically from that data.
deleted by creator
I believe there was an experiment making weather data more accessible through the URL bar, e.g. when people start searching for “weather” there, which could be useful. Presumably, telemetry like this can help determine which of such features to prioritise. I could indeed also imagine ads, but then not based on keeping a file on you with all your interests and sharing that with advertisers, but by locally choosing between a couple of categories of ads and showing the ones that are related to your current search, without anyone having to know what you’re actually searching for.
It seems like a profit-driven thing to me. Big piles of anonymized data are worth a pretty penny.
Mozilla’s famous non-profit status notwithstanding, of course.
A non-profit can, in fact, profit, but it has specific rules on what it can do with those profits. Tax law is a rabbit hole and I don’t even wanna peer in
Used to work for a non-profit retirement community in a pretty small area; the guy running the joint lived in a $3M “house” with a full 7 car garage.
Mozilla Foundation has a wholly owned subsidiary that is Mozilla Corporation that is for-profit.
For instance, the revenue from Google for being the default search engine goes to Mozilla Corporation. So things search-related will indeed be part of their for-profit arm.
I’d like to read more on that if you have anything. Seems like too big a loophole?
It’s not a loophole. As a subsidiary, profits are still invested into the nonprofit and they’re still guided by the Mozilla manifesto. It just lets them do more and raise more funds which would be difficult to do with nonprofit status (selling default search engine for instance). Here’s their original press release when they incorporated Mozilla Corporation in 2005.
It’s technically for profit, but it has a single shareholder: the Foundation. There are no greedy shareholders that can get rich off of that profit.
Of course, employees/board members can be richly compensated, but that’s independent of for-/non-profit status.
Enshittification hits every company, even Mozilla.
Unfortunately Mozilla is being run by a McKinsey consultant.
deleted by creator
They should have put more emphasis on the possible uses of what they find out…
The important part that you should know (and should already be using):
Remember, you can always opt out of sending any technical or usage data to Firefox. Here’s a step-by-step guide on how to adjust your settings.
To improve Firefox based on your needs, understanding how users interact with essential functions like search is key.
Buddy, I just want to type a search term and get results. Stop spying on my search. Your only job is to transfer it to the server and then present the result. I don’t need you to suggest some bullshit to me, or think of “ways to improve search”.
This helps us take a step forward in providing a browsing experience that is more tailored to your needs, without us stepping away from the principles that make us who we are.
No. What the fuck? They are sounding more and more like Google. We need a new alternative that isn’t built from Gecko or Blink or whatever the engines are called.
lol use a fork - I’m sure they’ll have it turned off. Writing a browser engine is non-trivial.
Buddy, I just want to type a search term and get results.
Telemetry can help them do better at providing that. Devs aren’t magical beings, they don’t know what’s working and what’s not unless someone tells them.
That’s like saying the window pane between me and the teller has to understand the conversation and dynamically modify the light between him and me. The window pane’s only job is to let light through. Keep it at that.
No, this analogy would make more sense if it was a matter of recording a large number of interactions between customers and tellers to ensure that the window isn’t interfering with their interactions. Is the window the right size? Can the customer and teller hear each other through it? Is that little hole at the bottom large enough to let through the things they need to physically exchange? If you deploy the windows and then never gather any telemetry you have no idea whether it’s working well or if it could be improved.
You’re describing telemetry to improve the overall performance of the window. That’s very different from what Mozilla is doing: listening in on what is sent between the teller and me. They even gave an example of a trip to Spain being recorded as travel. That’s going way beyond the performance of a window. The teller is probably already doing that. The window operator has no business listening in on that discussion, nor recording even a summary of its details.
The analogy isn’t perfect; no analogy ever is.
In this case the content of the search is all that really matters for the quality of the search. What else would you suggest be recorded, the words-per-minute typing speed, the font size? If they want to improve the search system they need to know how it’s working, and that involves recording the searches.
It’s anonymized and you can opt out. Go ahead and opt out. There’ll still be enough telemetry for them to do their work.
Telemetry doesn’t need topic categorization. This is building a dataset for AI.
That would be a terrible AI.
The example of the “search optimization” they want to improve is Firefox Suggest, which has sponsored results which could be promoted (and cost more) based on predictions of interest based on recent trends of topics in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of stuff you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly predicting waves of infection based on the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.
deleted by creator
No one supports telemetry. People support Mozilla because they are the maintainers of the last standards-respecting, open source, and independent browser engine.
That’s pretty important, as Microsoft and Google etc. are trying to take possession of the internet for themselves.
I am a dev and I do not support telemetry
Same. If it’s to exist at all, it should be opt-in and explicit about what it’s doing.
deleted by creator
This isn’t even telemetry; it’s data collection for AI. That they refused to say that lets you know that they think what they’re doing needs to be obfuscated.
If they refused to say it, how do you know it’s the case? Also, how would the data described in the article be useful to an AI? Genuine question.
In life, people will frequently say things to you that won’t be the whole truth, but you can figure out what’s actually going on by looking at the context of the situation. This is commonly referred to as “being deceptive” or sometimes just “lying”. Corporate PR and salespeople, the ones who put out this press release, do it regularly.
You don’t need to record content categories of searches to make a good tool for displaying websites, you need it to perform predictions about what users will search for. They’ve already said they wanted to focus on AI and linked to an example of the system they want to improve, it’s their site recommender, complete with sponsored recommendations that could be sold for a higher price if the Mozilla AI could predict that “people in country X will soon be looking for vacations”.
I support anonymous telemetry collected by a small non-profit that helps protect our freedom. Not big tech.
sigh
As much as I hate to say it, Firefox is a privacy mess.
Pocket and Fakespot have very bad privacy policies. The Windows version has a unique Mozilla tracker if you download the installer from the website, and the Android version has Google Analytics built in. The existing and new telemetry is a bit heavy, but it’s anonymised, so it’s really the lesser of the various evils.
My recommendation is LibreWolf & Fennec as alternatives.
All we want is 1990s Google, guys. That’s really all we want. None of this AI BS that can’t find a country in Africa that starts with a K, just Google without the evil enshittification layer on top.
I think people forget how awful Google pre ~2008 was. Not in terms of the bullshit they do nowadays, just in quality of results really.
I switched from AltaVista to Google in the early 2000s because the AltaVista index was stale and full of spam. Google’s search tools were comparatively primitive (AV let you do things like word-stem search), but the results were really good.
Huh. I used it pretty much since the start and I certainly don’t recall it being that bad? Like you got a lot of relevant content up front usually.
I feel like you had to learn how to use it, operators and phrasing etc. They dumbed it down with search suggestions and even further by changing search terms to synonyms, and now outright ignoring terms. Height of Internet search was definitely pre 2008. More like 2005.
If you had the right query, yes. But getting there if you didn’t know the exact words in the website used to take a number of attempts and google-fu. By early 2010s this was vastly improved.