Innovation and privacy go hand in hand here at Mozilla. To continue developing features and products that resonate with our users, we’re adopting a new approach …
Mozilla wants to be an AI company, and this is data collection to support that. Telemetry meant to understand the browsing experience doesn't need to be content-categorized.
Unless they're going to publish their data, AI can't be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, intentionally or not, do most of the shaping of the resulting system. When it's published, it'll just be a Python project with some magic numbers and a "put data here" directory, with no indication of what went into selecting the data or choosing those parameters.
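To make that concrete, here's a hypothetical illustration (not from any real project) of the kind of config such a "released" repo ships: the code is trivially open, but the decisions that actually shaped the model are bare magic numbers and the data is a placeholder.

```python
# Hypothetical training config of the kind a "released" model repo ships.
# Everything below is illustrative, not taken from any real project.

DATA_DIR = "data/"  # "put data here": no word on what data, or why

HYPERPARAMS = {
    "learning_rate": 3e-4,  # why 3e-4? undocumented
    "batch_size": 4096,     # chosen how? unstated
    "warmup_steps": 2000,   # magic number
    "dropout": 0.1,         # magic number
}
```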
I just want a command-line interface to my browser; then I'll tell my local Mixtral 8x7B instance to "look in all my tabs and place all tabs about 'magnetic loop antennas' in a new window, ordered with the most concrete build instructions first", all with a 100% open source model. I'm looking into the Marionette protocol to accomplish this. It would be nice if the browser came with that out of the box.
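A rough sketch of the tab-listing half, assuming the marionette_driver Python package and a Firefox instance launched with the -marionette flag (it listens on port 2828 by default); moving tabs into a new window would take more than this:

```python
# Minimal sketch: enumerate open tabs over the Marionette protocol.
# Assumes: pip install marionette_driver, and Firefox launched with
# `firefox -marionette` (listens on port 2828 by default).
from marionette_driver.marionette import Marionette

client = Marionette(host="127.0.0.1", port=2828)
client.start_session()

tabs = []
for handle in client.window_handles:
    client.switch_to_window(handle)
    tabs.append((client.title, client.get_url()))

# This (title, url) list is what you'd hand to the local model;
# see the Ollama sketch further down for that half.
for title, url in tabs:
    print(f"{title} -- {url}")
```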
What does "open source" mean to you? Just free/noncorporate? Because a "100% open source model" doesn't really make sense by the traditional definition. The "source" for a model is its data, not the code and not the model itself. Without the data you can't build the model yourself, can't modify it, and can't inspect why it does what it does.
I think the model can be modified with LoRA without the source data?
In any case, if the inference software is actually open source, all the necessary data is free of any intellectual property encumbrances, and it runs without internet access or non-commodity hardware, then it's open source enough to live in my browser.
You can technically modify any network weights however you want with whatever data you have lying around, but without the core training data you can't verify that your modifications aren't hurting the original capabilities. Fine-tuning (which LoRA is for) isn't the same thing as modifying a trained network. You're still generally stuck with the original trained capabilities; you're just reworking the final layer(s) to redirect/tune the model toward your problem. You can't add pet faces into a human face detector, and if a new technique comes out that could improve accuracy, you can't rebuild the model with it.
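To make the distinction concrete, here is a minimal LoRA sketch using the Hugging Face PEFT library (the model id is just an example). The base weights stay frozen and only small adapter matrices are trained on top of them, which is why you can redirect a model but not rebuild it:

```python
# Minimal LoRA sketch with Hugging Face PEFT. The base model's weights
# stay frozen; only the low-rank adapter matrices are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)

# Typically well under 1% of parameters are trainable; everything the
# (unpublished) training data baked into the base stays as-is.
model.print_trainable_parameters()
```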
So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don't get the source, it's not open source. A binary can be free and non-corporate, but it's still not source code.
I mean, I would prefer a dataset that's properly open: The Pile, LAION, Open Assistant, and a pirated copy of every word, song, and video ever written and spoken by man.
But for now I'd be happy to fully control my browser with an offline copy of Mixtral or Llama.
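For the model half, a rough sketch assuming a local Ollama server with Mixtral pulled (this is Ollama's /api/generate endpoint; the tab titles below are placeholders and would really come from the Marionette sketch above):

```python
# Sketch: ask a local Mixtral (via Ollama's HTTP API) to sort tabs.
# Assumes `ollama pull mixtral` and the server on its default port.
import json
import urllib.request

tab_titles = [  # placeholders; really from the Marionette sketch above
    "Magnetic loop antenna build log",
    "Mozilla telemetry discussion",
]

prompt = (
    "From these browser tab titles, list only the ones about "
    "'magnetic loop antennas', most concrete build instructions first:\n"
    + "\n".join(tab_titles)
)

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(
        {"model": "mixtral", "prompt": prompt, "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```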